While studying examples is known to benefit the acquisition of other skills, providing model texts in writing tasks has often produced mixed results. We examined the effects of models by varying their quality and labeling. Ninety-five psychology majors received basic facts, both relevant and irrelevant, about one of two simple experiments and wrote a method section. The control group (n = 22) saw no models. The models group (n = 73) saw three student-written method sections: either three good models (AAA) or one good, one moderate, and one poor model (ABC). Half of each quality condition saw the models labeled with their grades; the other half saw them unlabeled. The students' texts were rated holistically and analyzed for content. The models group's texts were rated as better organized than those of the control group. The models also influenced content: seeing a proposition in the models increased the likelihood that students would include it in their own texts, although this effect was smaller for propositions appearing only in the moderate or poor models. For the more difficult writing topic, the models group included more topical information than the control group, comprising more essential propositions but also more unnecessary ones. No systematic benefits emerged from labeling the models or from providing only good models, and students appeared able to judge the relative quality of the models even without labels. Overall, providing models seems to increase the salience of the topical information they contain.