Table 4: Model and performance details for studies testing on high-expertise and non-narrative domains. Fine method categories used in these studies include feature augmentation (FA), loss augmentation (LA), ensembling (EN), pretraining (PT), parameter initialization (PI), and pseudo-labeling (PL).

Study | Method | Performance
(Arnold et al., 2008) | Manually constructed feature hierarchy across domains, allowing back-off to more general features (FA) | Positive transfer from 5 corpora (biomedical, news, email) to email
(McClosky et al., 2010) | Mixture of domain-specific models chosen via source-target similarity features (e.g., cosine similarity) (EN) | Positive transfer to biomedical, literature, and conversation domains
(Yang and Eisenstein, 2015) | Dense embeddings induced from template features and manually defined domain attribute embeddings (FA) | Positive transfer to 4/5 web domains and 10/11 literary periods
(Xing et al., 2018) | Multi-task learning method with source-target distance minimization as an additional loss term (LA) | Positive transfer on 4/6 intra-medical settings (EHRs, forums) and 5/9 narrative-to-medical settings
(Wang et al., 2018) | Source-target distance minimized using two loss penalties (LA) | Positive transfer to medical and Twitter data
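To make the ensembling (EN) row concrete: approaches in the spirit of McClosky et al. (2010) select among domain-specific models by comparing the target text to each source corpus with similarity features such as cosine similarity. The sketch below illustrates that selection step over bag-of-words vectors; all names and the toy corpora are illustrative assumptions, not details from the cited study.

```python
# Illustrative sketch: pick a source-domain model by cosine similarity
# between the target text and each candidate source corpus (EN-style
# model selection). Toy data; not the cited study's implementation.
from collections import Counter
import math


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def pick_source_domain(target_text: str, domain_corpora: dict) -> str:
    """Return the source domain whose corpus is most similar to the target."""
    target_vec = Counter(target_text.lower().split())
    scores = {
        name: cosine_similarity(target_vec, Counter(text.lower().split()))
        for name, text in domain_corpora.items()
    }
    return max(scores, key=scores.get)


corpora = {
    "news": "the president said the economy grew",
    "biomedical": "the patient received a dose of the drug",
}
print(pick_source_domain("drug dose given to the patient", corpora))  # biomedical
```

In practice the similarity features would be computed over full corpora (and may combine several signals), and the selected domain indexes into a set of separately trained domain-specific models.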