Approved by Faculty Senate

University Studies Course Proposal

Department or Program: Psychology
Course Number / Title: P308 / Experimental Psychology
Number of Credits: 5
Frequency of Offering: Every semester
Catalog Description: Introduction to the scientific methods and research techniques in psychology. Laboratory experiences are included.
This is an existing course previously approved by A2C2: Yes
This is a new course proposal: No

 

University Studies Approval is requested in Unity & Diversity: Critical Analysis course
Department Contact: Carrie Fried, Assistant Professor
Email Address: cfried@winona.edu

Psychology 308, Experimental Psychology, 5 s.h.

Unity & Diversity: Critical Analysis course

Proposal and Rationale

Catalog Description: Introduction to the scientific methods and research techniques in psychology. Laboratory experiences
are included.

General Course Information:

Psychology 308 (Experimental Psychology) is being proposed as a Unity and Diversity Critical Analysis course within the University Studies Program. General Psychology and Statistics are prerequisites for the course. The course has 3 hours of class time and 4 hours of lab per week. The intent of P308 is to teach students the different aspects of conducting research in psychology. These aspects include:

1) Understanding the logic behind experimental research,

2) Learning the basic methodological issues of different types of research,

3) Learning to critique research and spot flaws in research design,

4) Reviewing and applying skills in data and statistical analysis, and

5) Learning how to write up an empirical research paper.

Class time typically consists of lectures supplemented by discussions, case studies, and examples to help students apply course material. Lab sessions typically give students hands-on experience designing studies, designing measures, conducting research, analyzing data, and writing. All students carry out several complete (though small) experiments. This includes developing a hypothesis, designing the study, developing materials, collecting and analyzing data, and writing a final research report.

The course focuses on experimental methodology in psychological research, but also touches on alternative scientific methodologies. Core topics covered in the course include:

1) Scientific Thinking

2) Theory and Hypothesis Testing

3) Experimental Methodology

4) Issues of Control and Confounding Variables in Experimental Methodology

5) Types of Validity (e.g., Internal, External, Construct) in Experimental Research

6) Measurement Issues such as Validity and Reliability

7) Within-Subject, Between-Subject, and Mixed Experimental Designs

8) Factorial Designs

9) Statistical Analysis (Descriptive & Inferential Statistics)

10) Writing in APA Format

11) Quasi-Experimental Designs

12) Introduction to Non-Experimental Methodology (e.g., Surveys, Observations, Quasi-Experiments)

13) Non-Experimental Methodology Issues (e.g., Survey Sampling, Sample Bias, and Sample Error)

14) Drawing Appropriate Conclusions from Different Forms of Research

15) Psychology Research in Applied Settings.

Specific Outcomes for USP Unity and Diversity Critical Analysis courses:

A: Promote students' ability to evaluate the validity and reliability of information:

One of the primary goals of this course is teaching students how to evaluate whether the information provided in psychological
research is reliable and the conclusions drawn from the research are valid. This outcome is addressed at several different levels
and from many different angles. For example, at a micro-level, these issues are addressed by teaching students to evaluate the
validity and reliability of measures. In measurement, these terms have specific definitions involving what the measure is actually
measuring (validity) and how well – or consistently and objectively – it is measuring it (reliability). This area has a long history in
the field of psychology, where the validity of important measures (e.g., intelligence) is often hotly debated and other measures
(e.g., projective personality tests) are notoriously unreliable. Students are exposed to these ideas through lecture and discussions
of the validity and reliability of actual measures. Students also develop these skills by coming up with their own measures while
juggling issues of reliability and validity.
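To make the reliability side of this concrete, here is a minimal illustrative sketch (in Python; the four-item scale and the ratings are invented for demonstration, not course materials) of computing Cronbach's alpha, one common index of a measure's internal-consistency reliability.

```python
# A minimal sketch of computing Cronbach's alpha, a common
# internal-consistency reliability estimate. The ratings below
# are hypothetical, invented purely for illustration.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = respondents, columns = questionnaire items."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five hypothetical respondents answering a four-item scale (1-7 ratings).
ratings = np.array([
    [5, 6, 5, 6],
    [3, 3, 4, 3],
    [6, 7, 6, 7],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
])
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")  # ~0.98 here
```

A high alpha indicates the items hang together consistently; it does not by itself establish validity, i.e., that the scale measures what it is intended to measure.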

At a more macro-level, replication in research (as a way to evaluate reliability of findings) is a central feature in basic scientific thinking
and theory building. Students are often introduced to these ideas through lectures on the "scientific method". They also explore the issue by discussing examples where replications (or failures to replicate) have altered theories or psychological principles.

Ideas of validity permeate this course. A central theme in psychological research is dealing with the sometimes conflicting concepts
of internal and external validity. Briefly, internal validity has to do with whether the independent variable in an experiment (cause) is
solely responsible for differences in the dependent measure (effect). For example, does exposing an experimental group to violent video
games cause those participants to behave more aggressively than a comparison group that was not exposed to the violent video games? Or can the differences between the groups be attributed to other factors (e.g., individual differences, effects of simply playing any video game)? Students learn the fundamentals of internal validity by learning the basics of experimental research, studying previously conducted experiments, and designing their own experiments. Understanding internal validity is key to understanding experimental research and
research design issues. Students learn to evaluate internal validity by looking for the factors that violate the underlying assumptions
of an experimental design. This is often done through analyzing and critiquing designs, as explained under Outcome C below.
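As a concrete illustration of the design logic above, the following minimal sketch (in Python; participant IDs and group sizes are hypothetical) randomly assigns participants to the two conditions of the video-game example. Random assignment spreads individual differences evenly across groups, on average, so that a difference on the dependent measure can be attributed to the manipulation.

```python
# A minimal sketch of random assignment in a two-group experiment,
# the design discussed above (violent vs. non-violent video games).
# Participant IDs and group sizes are hypothetical.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical subjects
random.shuffle(participants)                         # randomize the order

# Split the shuffled list in half: each subject has an equal chance
# of landing in either condition, so pre-existing differences should
# average out across groups rather than becoming confounds.
experimental = participants[:10]   # play a violent video game
control = participants[10:]        # play a non-violent video game

print("Experimental:", experimental)
print("Control:     ", control)
```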

External validity, on the other hand, refers to whether research findings extend to other populations (aside from those in the study) or are applicable outside the laboratory. External validity is an important issue in psychological research because research findings often have real-world applications, such as in the areas of mental health or organizational psychology. Conceptually and logically, internal and external validity are independent issues; in practice, they often conflict, in that most types of research stress external validity at the expense of internal validity, or the other way around. The course stresses understanding the differences between the types of validity and how to evaluate each form of validity in experiments. Students often learn about external validity, and the potential conflict between the two forms of validity, by examining the process of doing research in applied settings and/or applying laboratory research to solving real-world problems.

B: Promote students' ability to analyze modes of thought, expressive works, arguments, explanations, or theories:

Because teaching students how to conduct psychological research inherently involves getting students to analyze research at many levels, this objective is addressed in many ways throughout the course. The following are examples of the activities that address this objective:

1) In examining where research ideas come from, students learn that sometimes a psychological phenomenon is explained differently by various theories. Students examine how competing theories may result in competing hypotheses for identical situations, and how a well-designed study may be able to distinguish between two or more theories.

2) Students read previously published research studies and report on the theories, methods, and results (a task students often find very difficult). Sometimes this is done in the form of stand-alone assignments, and often it is done as part of writing the literature reviews for their own research reports.

3) As is extensively documented in this proposal, students learn why and how specific aspects of research designs can rule out types of alternative explanations; for example, how certain control conditions must be used to adequately test certain hypotheses.

4) Students are exposed to various forms of research and examine how those forms of research are used to test different theories, provide different levels of explanation, and are used in different contexts. For example, the level of explanation provided by a case study or survey will be very different from the level of explanation provided by a tightly controlled laboratory experiment, even if both are designed to study the same underlying phenomenon.

5) Finally, students are exposed to issues of research methodology through several different modes of thinking. These include mathematical thinking about statistical analysis of data, logical thinking about research design, and conceptual thinking about theories and hypotheses.

C: Promote students' ability to recognize possible inadequacies or biases in the evidence given to support arguments or conclusions:

The primary goal of this course is to teach students how to do research correctly so that they can draw appropriate conclusions and support their claims. Central to learning this lesson is learning how and why research done incorrectly may lead to faulty conclusions. This is often done through critiquing research designs (often weak or flawed designs). The primary goal in these critiques is to teach students both to recognize flaws and weaknesses in methodology and to understand how and why these flaws might call the results and conclusions into question.

Several potential areas of weakness are stressed. For example, when conducting experiments, the lack of true random assignment of participants to conditions, or the lack of adequate control of possible confounding variables (unwanted systematic differences between conditions), makes it impossible to draw causal inferences. Students are taught how to spot these flaws and also to understand why they call cause-effect conclusions into question. Other potential flaws include incorrect balancing of the order in which trials are presented in within-subject designs (experiments where subjects participate in more than one condition), incorrect control of demand characteristics and experimenter effects (e.g., placebo effects or expectancy biases), use of inappropriate control or comparison groups, and use of invalid or unreliable outcome measures.

Again, the goal of these critiques is to teach students both to recognize flaws and to understand the ways in which the flaws may undermine the conclusions. These activities also reinforce students' understanding of the methodological basics involved in psychological research. This outcome is typically achieved through discussing published research, assigning homework that requires students to identify flaws in research designs, and having students design experiments that are then critiqued by the class.
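To illustrate one remedy for the order-balancing flaw mentioned above, here is a minimal sketch (in Python; the condition names are hypothetical) that generates a balanced Latin square, a standard counterbalancing scheme in which each condition appears in each ordinal position, and follows each other condition, equally often across participants.

```python
# A minimal sketch of counterbalancing condition order in a
# within-subject design with a balanced Latin square. Condition
# names are hypothetical. With an even number of conditions, this
# construction puts each condition in each ordinal position once
# and has each condition follow every other condition once,
# controlling order effects such as practice or fatigue.
def balanced_latin_square(conditions):
    n = len(conditions)
    square = []
    for i in range(n):
        # Standard zig-zag construction, shifted by i for each row.
        row, low, high = [], 0, n - 1
        for pos in range(n):
            if pos % 2 == 0:
                val = low
                low += 1
            else:
                val = high
                high -= 1
            row.append(conditions[(val + i) % n])
        square.append(row)
    return square

for p, order in enumerate(balanced_latin_square(["A", "B", "C", "D"]), start=1):
    print(f"Participant {p}: {' -> '.join(order)}")
```

Rows are assigned to participants in rotation; with an odd number of conditions, a reversed copy of the square is typically added as well to restore the balance.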

Skills learned through these activities extend beyond the academic research emphasized in the course to help students critically evaluate questionable research findings they are likely to encounter in their lives. This includes so-called "pop-psychology" or "pop-science" findings that are often reported in the media or become the center of fads or social movements. Examples that are sometimes discussed in class include claims of "healing touch" and other forms of alternative medicine that have lately become centers of debate.

D: Promote students' ability to advance and support claims:

There are primarily three places where this outcome is met: designing experiments that will test specific hypotheses and rule out alternative hypotheses, conducting appropriate statistical tests and correctly interpreting and reporting the results, and writing a research report.

It has already been discussed elsewhere in this proposal how students are taught the mechanics of experimental design and psychological research. Students in this course also design and conduct their own experiments. They must develop measurement tools, designs, and procedures that allow them to support their hypotheses and rule out alternative hypotheses. In doing this, they must work through the validity and reliability of their measures, the adequacy of their controls for confounding variables, etc. They must develop appropriate procedures that match the type of design used (e.g., using proper counterbalancing designs). The desired outcome is to provide students with the skills to conduct research that will provide valid and reliable support for claims.

Another area where this particular outcome is addressed is in the statistical analysis techniques taught to students and used by students in the class. Students learn which tests to use to infer statistical significance depending on the type of data (frequencies, continuous scales, etc.) and the type of relationship being tested (correlations between variables, differences between groups, etc.). In lab sessions, students get hands-on experience in conducting these tests and interpreting the results. They are also taught how to report statistical results to others through graphs, tables, and accurate reporting of tests and results. These activities provide students with the ability to advance support for their claims through the use of statistics.
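As an illustration of matching tests to data, here is a minimal sketch (in Python, using SciPy; all numbers are invented) covering the three situations named above: frequency data, differences between two groups, and a relationship between two continuous variables.

```python
# A minimal sketch of matching statistical tests to data types,
# as described above. All numbers are invented for illustration.
from scipy import stats

# Frequencies (counts): chi-square test of independence.
# Rows = condition, columns = yes/no response.
observed = [[30, 10], [18, 22]]
chi2, p_chi, dof, _ = stats.chi2_contingency(observed)
print(f"Chi-square: chi2({dof}) = {chi2:.2f}, p = {p_chi:.3f}")

# Differences between two independent groups: t-test.
group_a = [5.1, 6.2, 5.8, 6.5, 5.9, 6.1]
group_b = [4.2, 4.8, 5.0, 4.5, 4.9, 4.4]
t, p_t = stats.ttest_ind(group_a, group_b)
print(f"t-test: t = {t:.2f}, p = {p_t:.3f}")

# Relationship between two continuous variables: Pearson correlation.
hours_studied = [1, 2, 3, 4, 5, 6]
exam_score = [55, 60, 64, 71, 75, 80]
r, p_r = stats.pearsonr(hours_studied, exam_score)
print(f"Correlation: r = {r:.2f}, p = {p_r:.3f}")
```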

Finally, writing research reports in proper APA format is a skill stressed in the class. Students learn this by reading reports written in APA format and by actually writing reports in this format. APA format is a specific manuscript-writing format developed by the American Psychological Association that stresses how to fairly and accurately report research findings so that they can be assessed by an audience. Aspects of these manuscripts include: A) writing an introduction that supports the hypothesis through a review and analysis of theory and previous research; B) writing a methods section so that the audience can understand the procedures, variables, and controls used to test the hypothesis and rule out alternative hypotheses; and C) reporting results in ways that support the statistical significance and validity of the claims for the audience.

Often, all of these activities are brought together when students design, conduct, and write up their own research project. This may be done on a small scale several times a semester, and often students conduct a larger, more independently developed final experimental research project.

 

Course Description and Calendar for P308

Experimental Psychology

Instructor: Dr. Carrie Fried

Office: 231 F Phelps Hall, Office hours: M&W 10-11, 12-2, T 11-1, Th 3:45-5, F 10-11

Phone: 457-5483, email: cfried@winona.edu

This course is a USP Unity and Diversity Critical Analysis course.

Such courses are required to meet the following outcomes:

A: Promote students' ability to evaluate the validity and reliability of information:

B: Promote students' ability to analyze modes of thought, expressive works, arguments, explanations, or theories:

C: Promote students' ability to recognize possible inadequacies or biases in the evidence given to support arguments or conclusions:

D: Promote students' ability to advance and support claims:

These letters are used in the course calendar, course objectives, & throughout the syllabus to indicate places in the class where these outcomes are met.

Required Texts:
APA Publication Manual, 4th edition (APA, publisher)
Research Methods (4th ed.), Graziano & Raulin (Allyn & Bacon, publishers)

Course goals and objectives:

1) Intro. to conducting research in psychology, w/ emphasis on the experimental method. (out: a, b, c, d)

2) Learn technical & methodological issues involved in quantitative research in psychology. (out: a, c, d)

3) Learn to critique research and spot flaws in research designs & conclusions (out: a, b, c )

4) Gain experience planning, designing, conducting, and "writing up" several experiments. (out: a, b, c, d)

5) Develop skills in doing library research and writing in the social sciences. (out: b, d)

6) Gain experience with non-experimental techniques like surveys, observations, etc. (out: a, b)

7) Review the use of appropriate statistical techniques (out: b, d)

8) Gain experience using computers and software to conduct statistical analysis (out: b, d )

Class-time: Class time will be spent primarily in lectures, going over material and answering questions.

Lab Sessions: The lab sessions will give you a chance for hands-on experience and active learning of the material. In the lab sessions you will do a little bit of everything: designing and conducting studies, writing surveys, graphing and analyzing data, and writing.

Workload: Keep in mind this is a 5-credit class. You should plan to spend twice as much time on this class as on other 3-credit courses. Also, make every effort to attend all classes and labs.

Grades and Evaluation: There are three components to this class. There will be 3 paper assignments during the semester; these will be write-ups of experiments. There will also be 2 exams covering the major topics in the course. Finally, there will be assignments (homework, things done in lab, reading log, etc.).

Assignments = 28% (labs, reading logs, etc. Some graded, some pass-fail)

Papers = 36% (first 10%, second 10%, final 16%)

Exams = 36% (18% each)

Labs & Homework: There will be numerous small assignments done in this class. Some will be done in the lab; others will be more like homework. There will be no make-up lab assignments. If you miss lab, you will simply not get credit for that assignment. You will also have to read and report on a certain number of journal articles (explained later). All these assignments must be turned in on time to get credit.

 

Course Calendar (may change slightly as semester progresses)

Week / Readings (in text) / USP Outcomes / Major Assignments Due

1: Aug. 28-Sept. 1 pp. 1-11, Ch. 2, Append. C (outcomes: a, b)

Class: Intro to scientific methods and research

________________________________________________________________________

2: Sept. 6-8 Ch. 2, pp. 174-183, Ch. 10 (outcomes: a, b, c, d)

Class: Intro to experimental methods (Lab: Library Searches)

________________________________________________________________________

3: Sept. 11-15 Ch. 10, Append. B (outcomes: c, d)

Class: Homework 1, APA format (Lab: Study 1, designing experiments)

________________________________________________________________________

* 4: Sept. 18-22 Append. B, Ch. 4 & 5 (outcomes: b, c, d)

Class: APA style/format, Stats & Data (Lab: APA formatting, statistics, graphing)

________________________________________________________________________

5: Sept. 25-29 Ch. 11 (outcomes: a, c, d) Paper 1 due in lab W

Class: Stats, between- vs. within-subject designs (Lab: data / graphing, paper 1 critique)

________________________________________________________________________

* 6: Oct. 2-6 Ch. 11 (outcomes: a, c)

Class: Within-Subject Designs (Lab: designing / analyzing within-subject designs, Study 2)

________________________________________________________________________

 

7: Oct. 11-13 EXAM 1 on Wed. & Friday

Class: No class or lab MONDAY, no lab WED.

________________________________________________________________________

* 8: Oct. 16-20 pp. 64-71, 54-59

Class: Ethics (Lab: Ethics in research)

________________________________________________________________________

9: Oct. 23-27 Ch. 12 & 14 (outcomes: a, c, d) Paper 2 due in lab M

Class: Factorial designs, IRBs (Lab: FP, paper 2 critique, factorials)

________________________________________________________________________

* 10: Oct. 30-Nov. 3 pp. 194-196, 202-207 (outcomes: a, b, c)

Class: Demand characteristics & experimenter effects, validity (Lab: factorials, FP proposals & IRB)

________________________________________________________________________

11: Nov. 6-8 pp. 186-194, Ch. 9, Ch. 14 (outcomes: a, b, c)

Class: Validity issues, quasi-experimental designs (Lab: research proposals, pilot testing)

________________________________________________________________________

* 12: Nov. 13-17 Ch. 13, Ch. 7 (outcomes: a, c)

Class: correlational & non-experimental methods (Lab: data coding, non-exp. methods)

________________________________________________________________________

13: Nov. 20 Ch. 6 & 7 (outcomes: a, b, c)

Class: non-experimental methods (Lab: non-exp. methods, surveys, data analysis)

________________________________________________________________________

* 14: Nov. 27 – Dec. 1 Ch. 6 & 7 (outcomes: b, c) EXAM 2 on Wed. & Friday

Class: Non-Experimental methods, EXAMS (Lab: non-exp. methods, review)

________________________________________________________________________

15: Dec. 4-8 Working on Final Project (outcomes: d)

Class: MONDAY: sample oral presentation (Labs: Final Projects)

________________________________________________________________________

Final Exam (Presentation of final projects, turn in final papers)

Final papers due by 5:00 Monday, Dec. 11; final oral presentations Thurs., Dec. 14, 8 a.m.

 

Journal Logs

Read and write a brief report on 1 journal article a week. These articles can be ones you use in your papers (so this won't present extra work when you are working on papers). Keep these in a separate notebook that can be turned in (e.g., a small spiral-bound notebook or folder). They will be turned in every other Monday beginning on week 4 (a * on the calendar indicates journal logs are due that week). NOTE: You will turn in 2 journal reports at a time, so 2 are due starting the 4th week.

Requirements: These must be articles published in psychology journals. They must be research articles (as opposed to literature reviews or theory papers). When starting out, you might look up research papers that you are familiar with (e.g., ones that you have read about in textbooks in other classes). This assignment will be difficult at first, but the more research articles you read, the easier it becomes.

Reports should include the following 4 sections:

1. A complete reference of the article (in APA format).

2. A brief summary of the theory the research is based on, along with a statement of the hypothesis. This should only be a few sentences long.

3. A description of the methodology used, the design of the study, and the variables (how they were measured & manipulated). Identify both conceptual and operational variables.

4. The results: what statistical test did they use, and why (what were they testing)? This section may be very difficult, as they may be using techniques and tests you have never heard of. But try. Often they will do more than one test; focus on what seems to be the main test.

NOTE: If there are multiple studies reported in one paper, you only need to discuss one of them. It should be the key study – the one that most clearly tests the hypothesis.

SAMPLE:

1. Kenrick, D. T., & Gutierres, S. E. (1980). Contrast effects and judgment of physical attractiveness: When beauty becomes a social problem. Journal of Personality and Social Psychology, 38, 131-140.

2. This study examined the idea that beauty (how attractive someone is) is not a constant. Rather, it is a perception that can be influenced or affected by the situation or surroundings. Specifically, the hypothesis of this research was that an average-looking woman would be seen by men as less attractive after the men had been exposed to very attractive models. The idea is that the highly attractive models set up a contrast effect and average-looking people look worse by comparison.

3. Three studies were reported, all very similar. The first was a field experiment which manipulated whether or not subjects were exposed to a highly attractive model who would set up the contrast. Thus, the presence of the attractive model was the conceptual IV. The way it was manipulated (operationalized) was by looking at subjects who had (& hadn't) been watching Charlie's Angels. There were 3 conditions; the most important one involved polling a group of men immediately after watching the show. Two other groups (control groups) were men who were polled either before the show came on or who were not watching TV that night. The idea is that neither of these groups would have been exposed to the attractive TV stars, thus there should be no contrast effect.

Conceptually, the dependent variable was how men would rate the attractiveness of an average-looking woman. In this study, this was measured (operationalized) by showing all participants a yearbook photo of a woman who had been previously rated as average in attractiveness. The men in the study were asked to rate the photo on a scale of 1-7.

4. The authors used an ANOVA to compare the experimental condition to the other two conditions. They used an ANOVA because there were more than 2 groups they wanted to compare. The average rating of the experimental condition was 3.43, the average rating of the control conditions (combined) was 4.00. This was significantly different F(1,24)=5.03, p<.05.
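For illustration, here is a minimal sketch (in Python, using SciPy) of the general structure of this analysis: the experimental condition compared against the combined control conditions. The raw ratings are invented, since the paper reports only the group means; the original F(1,24) presumably reflects the error term of the full three-group design, while the two-group one-way ANOVA below (equivalent to a t-test) is a simplified stand-in.

```python
# A minimal sketch of comparing an experimental condition to the
# combined control conditions, as in the study described above.
# All ratings (1-7 scale) are invented for illustration; the paper
# itself reports only the group means (3.43 vs. 4.00) and F(1,24) = 5.03.
from scipy import stats

experimental = [3, 4, 3, 4, 3, 4, 3, 3, 4]     # polled right after the show
controls = [4, 4, 5, 4, 3, 5, 4, 4, 4,         # polled before the show,
            4, 5, 4, 3, 4, 5, 4, 4, 4]         # or not watching TV that night

f, p = stats.f_oneway(experimental, controls)
df_error = len(experimental) + len(controls) - 2
print(f"F(1, {df_error}) = {f:.2f}, p = {p:.3f}")
```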