

Stages of Studying for Comps

By: Clau González on 7/27/2014 at 6:50 PM Categories:
Comprehensive exams are a thing of legend. Every PhD hopeful must endure the process of preparing and must survive the exams themselves. If you have not taken these exams, here is what is in store for you:

Vague Awareness
At the start of your PhD program, you know you will have to face comps one day... but that day is not here yet... it is not even happening this year!

Reality Shock
Somehow, you managed to complete all your coursework and now the faculty have scheduled your comps - in a few weeks!

Panic Mode, Engaged
This happened much too soon... there is so much material to review... did you even learn what you were supposed to?

Despair
Nope. You most certainly did not learn anything. You know nothing. At all. Not a single thing. Time is ticking.

Confusion
You get over the despair and begin to study. And as you review, you realize that you don't even remember half of these things. What is the difference between moderated mediation and mediated moderation? Is that even a thing?

Anger
You have spent so much time studying and it dawns on you: most of the material has nothing to do with your research interests. Why are the faculty even testing you on this? Don't they know you need to focus and publish?

Studying
Once you let go of all the emotions, you get to work. You have to take comps soon!


Panic Mode, 2.0
It is too much. There is so little time. There is no way in the world a single human could possibly learn all of this. You panic. Again.

Determination
You finally realize that every single PhD you know has gone through this. You are not alone. You are capable. You can do this!

More Studying
Yes! You got a second wind and this time YOU KNOW you can handle it!


Exhaustion
After your second wind and new-found determination, you study like never before. Now, you have been studying for so long... you can't possibly do any more.

Sleeplessness
The exams start tomorrow. And sleep is elusive. You ponder all of the life choices that led you to this moment. You wonder if you are smart enough, and if you have what it takes.

The Exam
The exams begin. Somehow, you manage to type ALL THE WORDS. You write pages and pages and pages. Everything you know - everything you are - is poured into this exam.

Post-Exam
You are done with this exam. You can't even make it home. You are so done.

The Wait
All you have left to do now is wait for your faculty to decide if you pass...

The Results
You somehow manage to pass this hurdle. You should celebrate!! Be happy!


The Next Step
You are now ready for the next stage: the dissertation! Keep pushing!


The day before comps

By: Clau González on 7/22/2014 at 4:58 PM Categories:
Tomorrow, I finally take my comps. FINALLY.

Sadly, I did not get to take today off... I had so much to do before tomorrow.

In an ideal world, I would have one more week to practice more questions and really memorize all the cites. Though I am not sure this will make a big difference.

I began this blog as a way to keep myself motivated and focused during the time I had to study for comps. There have been many ups and downs in this process. If anyone were to look at the number of posts per day, it would be very easy to see which days were the better ones.

Despite any challenges (and there were many), I managed to review and write about every topic for the exam. I was also able to make electronic flashcards and tables of articles. Unexpectedly, I even made regular flashcards and practiced writing out all the cites multiple times (an inherently difficult task because of my dyslexia).

The only thing I did not do was practice more questions. But after one question, it was clear that my biggest weakness was memorizing all the citations. And that is how I have tried to spend my days.

All that is left to do is to be confident that my preparation will be evident as I take the test.

As expected, I will be away for the next few days.

¡Si se puede!

... and it continues

By: Clau González on 7/20/2014 at 5:23 PM Categories:

Since 90% of the test relies on memorization, this is all I have done today.

Yesterday I focused on making flashcards for economic foundations topics. Today I practiced over and over and over until I could write out all those cites and their key ideas.

My goal is to complete the sociological foundations flashcards today. Tomorrow, I plan to practice until I know them. I will be making the last set of theory flashcards tomorrow as well.

On Tuesday, I will practice until I can write out all the theory cites (just under 120) and all the theories (just over 20).

Just a few more days to comps...

Just studying

By: Clau González on 7/19/2014 at 3:13 PM Categories:

All I plan to do from now until the test is to keep reviewing. I am still making flashcards by hand. And I am still memorizing all the information in the tables of articles.

Practice Questions

By: Clau González on 7/17/2014 at 3:17 PM Categories:
The most important part of comps is answering the question. To that end, I will be practicing some questions over the next few days. It is not my intention to write out the answers, but rather to identify what is needed in order to answer the question.

Practice question 1
Imagine two US firms in the same industry.  In the last five years, both companies have been trying to expand their presence in China.  While one has been very successful in their efforts, the other has not. Agency Theory, Transaction Cost Theory, and theories of Dynamic Capabilities would likely offer some common and several unique explanations for the differences in these firms’ performance in this market.  Develop the unique explanations and design a study that would effectively confirm or eliminate one or more of these explanations.

In order to answer this question, I need to:
  • Define how each theory offers different explanations for performance
    • Agency Theory
      • Management structure
    • Transaction Costs
      • Vertical Integration, make-or-buy
    • Dynamic Capabilities
      • Routines of the organization
  • Having defined how each theory offers different explanations, I must design a study:
    • Structure the previous explanations into testable hypotheses or propositions
      • The management controls in one company create the right incentives
      • The successful company has integrated key operations in China
      • The successful organization is more efficient
    • Define the variables
      • DVs (outcome - success) 
      • IVs (based on the hypotheses)
      • Controls (age, funding)
    • Discuss sampling and data collection issues
      • Selection of the sample
      • Archival and survey methods
      • Length of the study
    • Discuss what data is necessary to confirm or eliminate each hypothesis
  • Cites
    • Agency: 
      • Jensen and Meckling 1976
      • Fama 1980
      • Demsetz 1983
      • Fama and Jensen
      • Jensen 1986
    • TCE:
      • Coase 1937
      • Williamson
      • Hill 1990
      • Jones and Hill  1988
      • Teece 1982
      • Alchian and Demsetz
    • RBV:
      • Alchian
      • Schumpeter
      • Nelson and Winter 1950
      • Teece
Creating this outline took me about 25 minutes. I did not recall many of the years of the citations. My goal is to create the outline and the cites in 20 minutes. Furthermore, I should be able to detail one or two words about each paper. I also did not include the critiques of TCE.

During the exam, I will have two hours to complete this question. In this case, the key ideas have been detailed and it is a matter of making this into paragraph form.

Grad School Wisdom

By: Clau González on 7/17/2014 at 1:52 PM Categories:

"True wisdom begins when we accept things as they are"

I came across this quote while watching cartoons. It was quite unexpected. And yet it was perfect.

With so little time until my comps, I am naturally stressed. Every PhD student I have spoken with describes comps as one of the lowest and most stressful moments of the journey.

I admit that I am fortunate in that I had 4 weeks to exclusively focus on the exam. I had no other classes, no teaching duties, and very few RA responsibilities.

However, I am not fortunate in that my exam is closed notes and closed book. I do not understand why this has to be the case. Other majors in the school are allowed notes. And I know of several schools that allow notes as well. Furthermore, when writing papers, I will never have to cite something without the opportunity to look it up.

Memorizing the material does not upset me. But I do feel it has detracted from my focus on synthesizing what I have learned. The learning process for each activity is significantly different. We are going to be tested on our ability to synthesize, sure. But we also have to memorize.

I have devoted more than a few hours lamenting this fact. And I have been struggling to balance one set of activities over the other. Naturally, I have prioritized synthesizing. This activity is a lot closer to the ideal of being an academic: reading and writing and thinking and learning and imagining the possibilities of new things.

But with so little time to go, it has become apparent that I must simply accept what my test will be: it will require memorization. As hard as it is to believe, I have never had to memorize before. I have always taken the time to learn. And that includes all the econometric proofs I had to learn for my methods classes.

I will now stop using my time pondering why my comps are structured this way, and instead focus on creative ways to ensure that all the pieces of information I need in order to talk about these topics intelligently (or as close to that as possible) are stored in my brain by next week.

The End: Putting It All Together

By: Clau González on 7/16/2014 at 4:00 PM Categories:

Now that I have finished making a blog post for each of the major ideas for my exam, it is time to put it all together.

I have previously mentioned how I was working on a mindmap in order to accomplish this. My remaining task is not just synthesizing, it is also memorizing. This represents a challenge since I have never memorized anything as a way to prepare for an exam.

I recently linked this article from Mental Floss on my twitter. In it, the author describes 11 different ways to improve memory. They are:
  1. Concentrate For Eight Seconds
  2. Don’t Walk Through A Doorway
  3. Make A Fist
  4. Exercise
  5. Sleep
  6. Use Crazy Fonts
  7. Chew Gum
  8. Write Things Out 
  9. Know When To Turn The Music On—And Off
  10. Visualize
  11. Doodle
In an attempt to memorize for the first time ever, I am making flashcards by hand. Making flashcards helps with concentrating for more than eight seconds and with writing things out. In addition, it helps me review. Once I finish a particular theory, I will go to the mindmap to see if I can put things together from memory. This also helps with visualizing.

This is how I will be spending the rest of the day today. And possibly tomorrow.

The exams are next week...

¡Arriba y adelante!

Endogeneity

By: Clau González on 7/16/2014 at 3:22 PM Categories:
Endogeneity simply means that an explanatory variable is correlated with the error term. There are many reasons why this can happen:

  1. Measurement error
    • This happens when we do not have an accurate measure of the independent variables.
  2. Omitted variables
    • This happens when the model does not include all the variables it should, and thus we have an uncontrolled variable.
  3. Simultaneity
    • This happens when two variables are each affecting the other.

To address endogeneity, there are a few options:

  • Use instrumental variables to address omitted variables
  • Heckman correction models address sampling bias and unobservable variables
  • If the data is not a panel, propensity score matching can help with a small sample
  • Run a 2SLS or 3SLS
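
To make the instrumental-variables option concrete, here is a minimal two-stage least squares (2SLS) sketch on simulated data. Everything here (variable names, effect sizes, the instrument) is hypothetical, and in practice a dedicated 2SLS routine should be used so the standard errors are corrected.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000
u = rng.normal(size=n)                  # unobserved confounder: the source of endogeneity
z = rng.normal(size=n)                  # instrument: affects x, but not y directly
x = 0.8 * z + u + rng.normal(size=n)    # endogenous regressor (correlated with u)
y = 2.0 * x + u + rng.normal(size=n)    # true effect of x on y is 2.0

# Naive OLS is biased because x is correlated with the error term (through u).
print(sm.OLS(y, sm.add_constant(x)).fit().params)

# Stage 1: regress x on the instrument. Stage 2: regress y on the fitted values.
x_hat = sm.OLS(x, sm.add_constant(z)).fit().fittedvalues
print(sm.OLS(y, sm.add_constant(x_hat)).fit().params)  # slope close to 2.0
```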


(Adapted from course notes)
(Flashcards and other resources here)

Fixed vs. Random Effects

By: Clau González on 7/16/2014 at 3:02 PM Categories:
The best way to think about the difference between random and fixed effects is with a picture (the original figure is not reproduced here).


Fixed effects can be thought of as the relationship between predictor and outcome within an entity. In addition:

  • Assumes something about the entity may bias the predictor/outcome, so we need to control for it
  • Removes the effects of observed or unobserved time-invariant characteristics from the predictor variables
  • Helps with omitted variable bias
  • Creates separate regressions for each entity and averages effects across entities

Random effects, on the other hand, vary across entities:

  • Assumes the effects are random and uncorrelated with the IVs
  • Can include time-invariant variables
  • Assumes the entity’s error term is not correlated with the predictors, which allows time-invariant variables to serve as explanatory variables

Some examples include:

  • Time-varying observables – age, years of experience
  • Time-invariant observables – degree, gender
  • Time-invariant unobservables – ability, IQ
  • Omitted variables are time invariant
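
As a small sketch of the fixed-effects logic, the within transformation below demeans each entity's data by hand and then runs OLS; the time-invariant unobservable ("ability") drops out. The data and names are made up, and a dedicated panel estimator would be used in real work.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
entity = np.repeat(np.arange(50), 10)          # 50 entities observed for 10 periods
ability = np.repeat(rng.normal(size=50), 10)   # time-invariant unobservable
x = rng.normal(size=500) + ability             # predictor correlated with ability
y = 1.5 * x + 2.0 * ability + rng.normal(size=500)
df = pd.DataFrame({"entity": entity, "x": x, "y": y})

# Within transformation: subtracting each entity's mean removes anything
# time-invariant (like ability), mimicking entity fixed effects.
demeaned = df.groupby("entity")[["x", "y"]].transform(lambda s: s - s.mean())
print(sm.OLS(demeaned["y"], demeaned["x"]).fit().params)  # close to the true 1.5
```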

(Adapted from course notes)
(Flashcards and other resources here)

Structural Equation Modeling

By: Clau González on 7/16/2014 at 2:13 PM Categories:
When you think that there are unobserved or latent variables, a potential technique is Structural Equation Modeling (SEM).

Among other advantages, SEM:
  • Can control for random errors
  • Can model measurement error so the model is more precise
  • Can test elaborate models
In general, the SEM starts by looking at all of the relationships in your study. This is known as the perfectly saturated model. All other models are compared to this one. The chi square here should be zero.

From there, it is possible to fix some relationships based on theory. This model aligns variables to constructs and is similar to a factor analysis. It is referred to as the measurement model. If this model is very good, then the chi square will be insignificant. Furthermore, we do not want a significant difference between the estimated covariance matrices of the measurement and saturated models. The measurement model is used to assess convergent/discriminant validity. However, this model does not say anything about causality.

It is then possible to specify a causal model using theory. In this case, we also want a low chi square statistic. That would suggest there is no difference between the covariance matrix estimated from the theoretical model and the observed one.

Last is a further constrained model. This aims to get the most parsimonious model. In this case, some relationships are set to zero.  This model should have a larger chi square (bad news), but it is more parsimonious (good news). The goal is to determine if the change in the chi square between the theoretical model and this parsimonious model is significant. If the change is not significant, we should choose the parsimonious model.
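
The model-comparison logic in that last step boils down to a chi-square difference test. A minimal sketch, with purely hypothetical fit values:

```python
from scipy import stats

# Hypothetical fit statistics for two nested SEM models.
chi2_theoretical, df_theoretical = 48.3, 24    # causal (theoretical) model
chi2_parsimonious, df_parsimonious = 53.1, 27  # further constrained model

delta_chi2 = chi2_parsimonious - chi2_theoretical
delta_df = df_parsimonious - df_theoretical
p = stats.chi2.sf(delta_chi2, delta_df)

# If p > .05, the added constraints do not significantly worsen fit,
# so we should prefer the more parsimonious model.
print(f"Δχ² = {delta_chi2:.1f}, Δdf = {delta_df}, p = {p:.3f}")
```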

(Adapted from course notes)
(Flashcards and other resources here)

Survival Analysis

By: Clau González on 7/15/2014 at 4:02 PM Categories:
Survival analysis is used when we need to choose a point in time to measure survival, success, failure, death, etc.

Two important concepts relate to the time span over which we observe the data. This is referred to as censoring, and it comes in two varieties: left and right.

Left censoring happens when we do not know how many organizations, people, etc. failed before we began sampling. That means we only see the people who have survived up to that point, and how long they have survived.

Right censoring happens when we do not know whether the people survived, and for how long, beyond the conclusion of the study.

Some techniques to look at survival analysis include:

  • Hazard rate. This is the rate of not surviving to the midpoint of a specified time interval.
  • Cox proportional hazards regression. This examines which IVs influence whether failure occurred at a particular time. It is a proportional model, so time itself is not modeled.
  • A generic hazard model is not proportional, so it does model how failure happens as a function of time.
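
To make censoring concrete, here is a hand-rolled Kaplan-Meier sketch on made-up durations (a survival analysis package would be used in practice). Right-censored cases count as "at risk" until they drop out, but never as failures:

```python
import numpy as np

durations = np.array([3, 5, 5, 8, 12, 12, 15, 20])  # months observed (hypothetical)
observed  = np.array([1, 1, 0, 1,  1,  0,  1,  0])  # 1 = failure seen, 0 = right-censored

survival = 1.0
for t in np.unique(durations[observed == 1]):
    at_risk = np.sum(durations >= t)                 # still under observation at t
    failures = np.sum((durations == t) & (observed == 1))
    survival *= 1 - failures / at_risk               # product-limit update
    print(f"t = {t:>2}: S(t) = {survival:.3f}")
```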

(Adapted from course notes)
(Flashcards and other resources here)

Categorical DVs

By: Clau González on 7/15/2014 at 3:39 PM Categories:
When the DVs are categorical variables, different analyses should be used.

The most common case (and the only one discussed in class) is a binary outcome variable. In that case, either Logit or Probit should be used. Logistic regression estimates the probability of the outcome variable having a certain value (as opposed to the value itself).

When there are more than two outcome categories, we need multinomial logistic regression, which is essentially multiple logistic regressions solved simultaneously.
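
A minimal logit sketch on simulated data (names and effect sizes are hypothetical). Note that the model predicts a probability, not the outcome itself:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=300)
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))   # true logistic relationship
y = rng.binomial(1, p)                   # binary DV

logit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print(logit.params)                      # log-odds coefficients, near (0.5, 1.2)
print(logit.predict([[1.0, 0.0]]))       # predicted probability at x = 0
```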

Moderators and Mediators

By: Clau González on 7/15/2014 at 3:20 PM Categories:

A moderator is a qualitative or quantitative variable that affects the direction and/or strength of the relation between an IV (or predictor) and a dependent or criterion variable.

Within a correlational analysis framework, a moderator is a third variable that affects the zero-order correlation between two other variables. In the more familiar ANOVA terms, a basic moderator effect can be represented as an interaction between a focal IV and a factor that specifies the appropriate conditions for its operation. In other words, an observed relationship may be different at different levels of a third variable. Moderation refers to the situation where the direction and intensity of an effect of a predictor on a criterion depend on the levels or settings of a third variable. In essence, moderators attenuate or exacerbate the effect.

To test:
  1. Variables are entered into the regression equation in a stepwise, hierarchical fashion.
  2. Control variables (if any were collected) are entered first into the equation.
  3. To derive the main effects of X on Y, regress Y on X in this step.
  4. Add the interaction terms (X x M) to the analysis. If there is a change in total variance explained from step 3 to step 4, this suggests a total moderational impact, and the r2 values for each interaction term show the impact of the moderator for each relationship.
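
A minimal sketch of that interaction test on simulated data (all variable names and effects are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
x = rng.normal(size=300)
m = rng.normal(size=300)
y = 0.5 * x + 0.3 * m + 0.4 * x * m + rng.normal(size=300)  # moderation built in
df = pd.DataFrame({"x": x, "m": m, "y": y})

main = smf.ols("y ~ x + m", df).fit()   # step 3: main effects only
full = smf.ols("y ~ x * m", df).fit()   # step 4: adds the x:m interaction term
print(full.params["x:m"], full.pvalues["x:m"])  # significant => moderation
print(full.rsquared - main.rsquared)            # change in variance explained
```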



A variable is said to function as a mediator to the extent that it accounts for the relation between the predictor and the criterion (X and Y). Mediators explain how external physical events take on internal psychological significance. Whereas moderator variables specify when certain effects will hold, mediators speak to how or why such effects occur. 
For a variable to be considered a mediator, it must pass three tests:
  1. X correlates with Y
  2. X correlates with M
  3. M has a significant impact on Y when X is controlled for
    • The effect of X on Y when M is controlled for is 0 for full mediation
This has mostly been adapted from Baron and Kenny (1986).
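
And a sketch of the three mediation tests, again on simulated data with hypothetical effects:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
x = rng.normal(size=300)
m = 0.6 * x + rng.normal(size=300)            # X -> M path
y = 0.5 * m + 0.1 * x + rng.normal(size=300)  # mostly indirect effect of X
df = pd.DataFrame({"x": x, "m": m, "y": y})

print(smf.ols("y ~ x", df).fit().pvalues["x"])  # test 1: X predicts Y
print(smf.ols("m ~ x", df).fit().pvalues["x"])  # test 2: X predicts M
step3 = smf.ols("y ~ x + m", df).fit()          # test 3: M predicts Y given X
# x's coefficient shrinks once m is included; near 0 would mean full mediation.
print(step3.pvalues["m"], step3.params["x"])
```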

(Adapted from course notes)
(Flashcards and other resources here)

Analysis of Variance and Sundry

By: Clau González on 7/15/2014 at 3:00 PM Categories:
Here I will write very briefly about some analyses.

T-Test
This test compares the means of two groups. The goal is to determine if they are statistically different from one another.

Analysis of Variance
These include MANOVA, ANOVA, MANCOVA, and ANCOVA.

The MANOVA test compares the multivariate means of multiple groups. If this test is significant, then it is possible to do an ANOVA test for each DV.

In particular, the MANOVA:
  • Has more than one DV
  • Uses an omnibus F-test
  • The IV is categorical
  • Identifies significant DVs
  • Assumes normality, linearity
ANOVA:
  • Can be used for each DV after MANOVA F-test is significant.
The MANCOVA and ANCOVA tests include covariates, which are used when you want to control for these variables in the analysis of variance.

Regressions
The purpose of a regression is to estimate the relationship between the independent (predictor/explanatory) variables and the dependent (response/outcome) variable. Assumptions of regressions include:
  • No specification error (no omitted variables)
  • No measurement error
  • There is no multicollinearity (variables are independent)
  • Errors are independent
  • Errors are normally distributed (normality)
Hierarchical Regression
These are a type of regression model. The analysis builds successive linear regression models, each time adding more predictors. Here, the order of entry matters. Generally, the order is:
  • Controls
  • Main
  • Interactions and higher order
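
A minimal sketch of that order of entry, on simulated data with hypothetical names; R² should rise (or not) as each block is added:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({"control": rng.normal(size=n),
                   "x": rng.normal(size=n),
                   "m": rng.normal(size=n)})
df["y"] = 0.2 * df.control + 0.5 * df.x + 0.3 * df.x * df.m + rng.normal(size=n)

steps = {"controls":    smf.ols("y ~ control", df).fit(),
         "main":        smf.ols("y ~ control + x + m", df).fit(),
         "interaction": smf.ols("y ~ control + x * m", df).fit()}
for name, model in steps.items():
    print(f"{name:>12}: R² = {model.rsquared:.3f}")
```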

(Adapted from course notes)
(Flashcards and other resources here)

Factor Analysis

By: Clau González on 7/15/2014 at 2:12 PM Categories:
The first steps to take with new data were already discussed in the previous post.

Factor analysis comes in two varieties: Exploratory and Confirmatory:

Exploratory factor analysis is used when you don't have a clear idea of which items might belong together. There are many ways to find out; principal component analysis is the most commonly used. Items that hang together must also have face validity.

Confirmatory factor analysis is more complicated. This analysis tests that the hypothesized items belong to particular factors.
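
As a toy exploratory sketch (the items and loadings are invented), principal component analysis recovers which items hang together:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Six hypothetical survey items: 0-2 driven by one latent factor, 3-5 by another.
f1, f2 = rng.normal(size=(2, 400))
items = np.column_stack([f1, f1, f1, f2, f2, f2]) + rng.normal(0, 0.5, (400, 6))

pca = PCA(n_components=2).fit(items)
print(pca.explained_variance_ratio_)  # two components account for most variance
print(pca.components_.round(2))       # loadings show which items cluster together
```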

(Adapted from course notes)
(Flashcards and other resources here)

Research Methods

By: Clau González on 7/14/2014 at 10:42 AM Categories:
The research methods section will consist of several short posts. In each entry, I will discuss the general ideas behind each technique. I do not intend to detail how to perform each technique.

The general process for dealing with a dataset is as follows.
  • The first step is to understand the data:
    • Check for missing values 
    • Check for normality
    • Check for outliers
    • Check for errors
  • Depending on the data, the next step is to consider transformations
  • Next is to find the distributions:
    • Look at the variance and correlations
  • It is also important to note if any of the items are reverse-coded
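
A first pass over that checklist might look like the following sketch (the dataset is a simulated stand-in for whatever file you would actually load):

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for a freshly loaded dataset, e.g. pd.read_csv("study_data.csv").
df = pd.DataFrame({"x1": rng.normal(size=100), "x2": rng.exponential(size=100)})
df.loc[::10, "x2"] = np.nan              # inject some missing values

print(df.isna().sum())                   # missing values per column
print(df.describe())                     # ranges help spot errors
print(df.skew())                         # rough normality check
print(df.corr())                         # variances and correlations
z = df.dropna().apply(stats.zscore)      # |z| > 3 flags candidate outliers
print((z.abs() > 3).sum())
```
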
After getting a sense of the dataset and understanding it, the analysis follows. These are some of the techniques that were discussed in class:
  • Covariance Structure: Principal Components and Factor Analysis
    • To reduce the number of variables
  • ANOVA, MANCOVA, Chi Square, Hierarchical Regression Analysis
    • To understand differences among group means
  • Moderators and Mediators
    • To better understand relationships among variables
  • Survival Analysis
    • When the question is about success/failure
  • Structural Equations
    • For elaborate models
  • Panel Data
    • For longitudinal data

Tradeoffs

By: Clau González on 7/13/2014 at 5:28 PM Categories:
The last topic I will discuss in the Research Design section is the tradeoffs. One of the most important activities when designing a research approach is understanding the tradeoffs that we have to make. For instance, issues like sampling (including procedures and size), the precision of measures, and the number of variables are common.

Sampling procedures should be determined so as to obtain a sample that represents the population the researcher wants to study. But once availability and cost are considered, the sample may hurt external validity in terms of representativeness.
  • Ideal: Ensure that you have access to a large population, from which you can take truly randomized samples. If using a convenience sample, ensure sample relevance by making sure the characteristics of the population meet the boundary conditions of the theory.
  • Tradeoff:  Access is most often determined by relationships, grants, and blind luck, which force the researcher to compromise on the above ideals.
A random sampling procedure helps make our sample representative of the population we want to study, but it is very costly and sometimes impossible. Convenience samples are much more easily accessible and come at relatively low cost. However, using a convenience sample (e.g., undergraduates) may hurt sample representativeness.

Large sample sizes increase statistical power and reduce the possibility of Type II errors. But overpowered samples can be wasteful in terms of unnecessary effort, time, and resources spent, and may be oversensitive to trivial or irrelevant "significant" findings (Mone et al., 1996).
  • Ideal: As big a sample size as possible, to limit Type II error and to ensure power.
  • Tradeoff: Overpowered studies waste time and resources by using overly large samples.
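
A quick way to ground the "right-sized sample" point is an a priori power analysis; the effect size below is a hypothetical medium effect:

```python
from statsmodels.stats.power import TTestIndPower

# Per-group n needed to detect a medium effect (d = 0.5)
# with alpha = .05 and 80% power in a two-group design.
n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n))  # roughly 64 per group
```
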
Large number of measures increases the information about reliability of validity (increases construct validity), but it may decrease a participant’s motivation, and may increase the participant’s awareness of the test/manipulation. Implementing a small number of measures saves time and costs but will provide little information about reliability and validity.
  • Ideal: One should typically strive for using multiple measures to increase the validity of findings.
  • Tradeoff: May lead to the measurement of a different though possibly related construct. 
Trying to increase the precision of measures reduces error variance, but the measures may then become meaningless for our research. Trying to increase the meaningfulness of measures tends to result in less precise measures. In short, low generalizability → low error variance, while high generalizability → high error variance.
  • Ideal: To improve measurement, researchers will often try to develop a new instrument.
  • Tradeoff: A huge investment in time and effort, and no normative data base.
Increasing the strength of measures enables us to detect significant effects on the dependent variables, but it makes the treatment reactive, so problems of demand characteristics and evaluation apprehension will occur. Decreasing the strength of measures decreases the participants' awareness of the manipulation and thus increases external validity, but it may cause range restriction and small effect sizes, and Type II error will increase.

Strength of manipulation: Strengthening the manipulation increases effect size and enables the researcher to detect a significant effect of the manipulation. But that increase can lead to the participants' awareness of the treatment and can cause evaluation apprehension. In field settings, the range of independent variables the researcher wants to manipulate can be large, and the variance of the dependent variable will be large. However, it is difficult to attribute large differences in the dependent variable to the manipulation because of noise. In laboratory settings, on the other hand, the researcher can manipulate variables more precisely than in field settings. But the range and strength of the variables tend to be small, and the effects on the dependent variable will also be small.
  • Ideal: You want to maximize systematic (experimental) variance.  Design, plan and conduct research so that the experimental conditions are as different as possible.  Similarly you want to control extraneous systematic variance, and minimize error variance.
  • Tradeoff:  When everything is constant, you can always show an effect. Lose generalizability.

A large number of variables can make a model more comprehensive, but it will also increase complexity and the difficulty of analysis. It becomes difficult to make causal inferences both logically and statistically. Highly sophisticated statistical procedures must be used when there are many independent/dependent variables. Also, the effects of some independent variables may be small. A small number of variables will enable the researcher to focus on each specific variable more and in turn will make the model easier to analyze. The tradeoff is between external (more variables) and internal (fewer variables) validity.
  • Ideal: Each dependent variable is the presumed effect of one or more independent variables as an antecedent.  When operationally defined they are observable and measurable.
  • Tradeoff: Variables are difficult to define and measure. There are problems of generalizability related to paper-people studies. Problems are also encountered with self-report measures, due to differences between behavioral intention and actual behavior.
Conclusion
There is no perfect research. The choice of design depends on what type of information the researcher wants. Therefore, it is imperative to ask the right questions ahead of time. This will lead to a more effective design.  Also, it is often preferable to use more than one design if possible (triangulation), as this will give us more useful information.


(Adapted from group and course notes)
(Flashcards and other resources here)

Method Variance

By: Clau González on 7/13/2014 at 4:51 PM Categories:
I will briefly discuss method variance, as I have mentioned it as a potential problem in previous posts. It is generally understood that research focusing on macro-level questions (such as strategy) rarely uses laboratory experiments, and so rarely has the problem of common method variance.

However, each time a research question emerges and the research design is created, it is important to remember and know the whole set of approaches available.


Method variance refers to variance that is attributable to the measurement method rather than to the construct of interest (Podsakoff, MacKenzie, Lee, and Podsakoff 2003).

Method variance represents a major problem in research design since it can have a substantial impact on the observed relationships between predictor and criterion variables in organizational and behavioral research. Method variance:

  • Is one of the main sources of measurement error.
  • Threatens the validity of the conclusions about the relationships between measures 
  • Provides an alternative explanation for the observed relationships between measures of different constructs that is independent of the one hypothesized
  • Yields potentially misleading conclusions

In order to address this problem there are procedural and statistical remedies.

Procedural remedies

  • Obtain measures of the predictor and criterion variables from different sources.
  • Temporal, proximal, psychological, or methodological separation of measurement
  • Protecting respondent anonymity and reducing evaluation apprehension.
  • Counterbalancing question order.

Statistical Remedies

  • Harman’s single-factor test
  • Partial correlation procedures designed to control for method biases.
  • Controlling for the effects of a directly measured latent methods factor 
  • Controlling for the effects of a single unmeasured latent method factor 
  • Use of multiple-method factors to control method variance
  • Correlated uniqueness model 
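
As a rough sketch of the first statistical remedy, Harman's single-factor test asks whether one unrotated factor accounts for the majority of variance. The item matrix below is random placeholder data, and PCA stands in for the unrotated factor extraction:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
items = rng.normal(size=(200, 12))   # stand-in for 12 survey items from one method

pca = PCA().fit(items)
first = pca.explained_variance_ratio_[0]
# If a single factor explained most of the variance, common method
# bias would be a serious concern.
print(f"First factor explains {first:.1%} of the variance")
```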


(Adapted from group and course notes)
(Flashcards and other resources here)

Context or Setting

By: Clau González on 7/13/2014 at 4:42 PM Categories:
This discussion is an extension on the Lab, Field, and Survey post.

The place where we conduct research on organizations can have a significant impact not only on how we conduct research but also on the results of our studies and investigations. This means paying attention to methodological fit, and how we match the type of research question we wish to explore with the type of setting to use for it.

The first step is to consider the current state of theory. Next, we select a specific setting, which has implications for the amount of control we have over extraneous variables.

  • Nascent Theory Research
    • Here, little or no previous theory exists. Researchers do not know what issues might emerge from the data, so they avoid hypothesizing specific relationships between variables.
  • Intermediate Theory Research
    • This draws from prior work to propose new constructs and/or provisional theoretical relationships.  Careful analysis of both quantitative and qualitative data increase confidence that the explanations are more plausible than alternatives.
  • Mature Theory Research
    • Encompasses precise models, supported by extensive research on a set of related questions in varied settings.  Research is elegant, complex and logically rigorous.  Questions tend to focus on elaborating, clarifying or challenging specific aspects of existing theories.

Research performed in organizations (i.e., in the field) differs from research performed in a lab. Some experiments can only be performed in the field, while others will benefit greatly from being studied in a lab where distractions are minimal and there is little chance of subjects discussing the experiment.

As Bouchard points out in his 1976 paper, field settings differ from lab settings in 3 major ways:

  • Boundary factors
    • Intensity, e.g. firings, layoffs, demotions, transfers in the workplace, etc.
    • Range, e.g. group size, span of control, time span, even physical spaces
    • Duration and frequency – ability to cross response system threshold. 
  • Structural factors
    • Natural time constants – only field studies allow us to study natural temporal structures
    • Natural units – there is “intrinsic order” in every field environment that we can study
    • Complexity 
  • Factors that broaden type of questions asked (not unique to the field)
    • Setting effects – field settings are very complex with a large number of forces at work in any given situation; they are dynamic and a relationship found at time T1 may not be found at time T2. 
    • Representativeness – while lab studies are fairly strong on representativeness of subjects, they are weak on representativeness of treatments. 

The instruments we use for conducting research in organizations will depend not only on the object of our study, but also on what is feasible within an organization. Some organizations may be completely open to an experimental setting, whereas others may only be willing to provide retrospective documentation and yet others will only be willing to administer surveys and questionnaires without participating in anything else.

  • Qualitative/Survey Research
    • Tools include Interviews and Participant Observation, as well as various Unobtrusive Measures (archival data, physical evidence).
  • Field Experiments
    • You can achieve better analysis using selection bias models and propensity score analysis. Useful also is the Ecological Momentary Assessment (EMA), which captures momentary behaviors and tracks them over time.
  • Laboratory Experiment
    • Different types of tools could include Standard Laboratory Experiments, Free Simulations and Experimental Simulations. 

Depending on what is available within an organization, the budget, and the project objectives, researchers will decide which instrument is best to use. Bouchard specifies 5 main methods to use in a field setting:

  1. Interview – there are different types of interviews and the type we use will depend on our research objectives as well as the organization. 
  2. Questionnaires – it’s a good idea to involve some of the respondents in the construction of the questionnaire (involving influential and highly competent respondents is a huge plus). 
  3. Participant observation – based on theory that an interpretation of an event may be correct only when it is a composite of 2 points of view, inside and outside. 
  4. Systematic observation – includes self-observation (self-reports, diaries, checklists), analysis of verbal material.
  5. Unobtrusive measures, i.e. archives, physical traces, simple observation, etc. 

Overall, field research is considered the most realistic and “is where the generality, applicability, and utility of psychological knowledge are put to the test” (Bouchard 1976). Field research can provide invaluable insights that lab studies will not be able to cover. However, we need to remember that field research is often more difficult, ambiguous, and costly and may not be generalizable.  Methodological choices can enhance or diminish the ability to address particular research questions.  The appropriate choices will result in fit through the logical pairing between methods and the state of theory development when a study is conducted.

(Adapted from group and course notes)
(Flashcards and other resources here)

Qualitative Techniques

By: Clau González on 7/13/2014 at 4:23 PM Categories:
Qualitative techniques for data collection are suited for exploratory studies, when little is known about the phenomena. That is, these techniques are well suited for the early stages of the five-step logical path for programmatic research (McGrath, 1964). However, these techniques are not suited for testing theories or making causal inferences, as they do not allow for manipulation or control of variables. As Lee et al. (1999) suggest, qualitative techniques are well suited for the purposes of description, interpretation, and explanation, but are not suited for issues of prevalence, generalizability, and calibration.

Some qualitative techniques for gathering data were discussed in Lee, Mitchell, and Sablynski (1999):
  1. Observation:
    • Relatively passive and nonintrusive, mainly to acquaint the researcher with the site and its members. Used early in a qualitative study. Two variants of this technique: take an organizational training course or to function as an actual employee.
    • Direct, Systematic Behavioral Observation
      • Use this method when the research question deals with overt behavior
      • Involve explicit, systematic procedures for observing, recording, and categorizing behavior
      • Choice of behavioral units to record is critical
      • Considerable training required to achieve reliability in coding behavior
      • Can either be a “fly on the wall” or become an active participant
  2. Access archival records 
    • Also passive and nonintrusive. Although “archival records” may not be a study’s main source of data, they can effectively confirm, supplement, or elaborate upon one’s more primary information. Potential problems include reactivity of measures, reliability of data, and construct validity of measures such as documents and records.
  3. Interviews
    • More active and intrusive, used most frequently. Interviews vary in duration, formality, number of people interviewed at one time, and how data are recorded. Researchers should decide on these ahead of time AND be able to explain their decisions during the peer review process. One potential con: the observed and/or examined people are almost always aware that they are being monitored. 
  4. Questionnaires 
    • More active and intrusive. Because questionnaires reduce spontaneity, inhibit free-flowing speech, and constrain ones’ manner, they provide more supplemental (rather than primary) data, similar to archival records. Nevertheless, questionnaires can “orient” the respondent and get everyone “on the same page.”
If interview and questionnaires are employed, Sackett & Larson discussed issues related to measures:
  • Rely on self-reports or reports about another person or entity
  • Interviews and questionnaires are conceptually similar; choice between is based on situation-specific, pragmatic considerations
  • Can be used as a substitute for direct observation (demographics), as a tool to assess internal states such as attitudes (because these are not directly observable), or as a measure of perceptions.
Qualitative techniques offer the following strengths:
  • Frees researchers from geographic concerns
  • Saves time and money in comparison to observation
However, they also have the following weaknesses:
  • Ambiguities about direction of causality
  • Problems with common method variance
Qualitative techniques are not popular. Pratt (2008) and Hannah & Lautsch (2011) addressed why scholars do not employ qualitative techniques:
  • Uncertainty about how to conduct good qualitative research, even among qualitative researchers
  • Lack of consensus in evaluating qualitative research, which makes it hard to publish: 
    • Overly high standards for qualitative research (higher than for quantitative research) → harder to get accepted for publication. Requirements regarding theory development are much higher compared to quant papers. 
    • Inappropriate standards, i.e., quantitative standards are inappropriately applied to qualitative research. 

(Adapted from group and course notes)
(Flashcards and other resources here)

Lab, Field, and Survey

By: Clau González on 7/13/2014 at 3:50 PM Categories:
In this post I will summarize lab experiments, field experiments, and surveys. The most important things to remember are the measures that are used and the inferences each approach allows us to make.

Lab Experiment
  • Requires IVs and DVs, pre and post testing, experimental and control groups.
  • Does not recreate reality, but studies variables in highly controlled, generic/created situation with stimulus/manipulation controlled by the experimenter.
Field Experiment:
  • Similar to the lab experiment, it manipulates something to see effect on something else. Less control.
Survey
  • Doesn’t include a manipulation and the goal is to leave setting and participants just as they were found. 
  • Looks at natural variance instead of stimulating variance.
The measures used for each approach are:
  • Field and Lab Experiments
    • Dependent variable is real behavior.
    • Uses ANOVA, t-tests, etc.
    • Looks at mean differences.
  • Surveys
    • Dependent variable is measured through self-reports.  
    • Correlational and often measured at same time.  
    • Common method variance issues.
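
The analytic contrast is easy to see in code: experiments compare group means, while surveys examine natural covariation. All data below is simulated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Experiment: mean-difference inference between treatment and control.
control = rng.normal(5.0, 1.0, size=50)
treated = rng.normal(5.6, 1.0, size=50)
print(stats.ttest_ind(treated, control))  # t-test on real behavior

# Survey: no manipulation, so only an associational inference is possible.
x = rng.normal(size=100)
y = 0.3 * x + rng.normal(size=100)
print(stats.pearsonr(x, y))               # X varies with Y, not X causes Y
```
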
The key aspect to keep in mind when designing a study is understanding how each approach allows us to make different inferences. Ideally, there will be triangulation - using multiple approaches.
  • Lab and Field Experiments
    • Causal: treatment X caused outcome Y
    • Able to do this because of issue of control (rule out extraneous influences on Y). 
    • Field allows for time.
  • Surveys:
    • Correlational
    • Non-causal
    • Associational inferences
    • X varies with Y
It is good to keep in mind the pros and cons for each approach.

Laboratory
  • PROS
    • Random assignment
    • Control
    • Precision
    • Causal inferences
    • Can plan
  • CONS
    • Ethical limitations
    • Generalizability
    • Short-lived (brevity)
    • Weaker manipulations than “real-life” counterparts
    • May miss critical boundary conditions
    • Artificial
    • Demand characteristics
    • Evaluation apprehension
    • Experimenter expectancy
Field Experiments
  • PROS
    • Random assignment (possibly)
    • Manipulation in real setting
    • Allows control for causal relationships
    • May encourage application of results
    • Longer time frame
    • Context is meaningful to participants
    • Subjects less likely to be aware in experiment
  • CONS
    • Hard to do
    • Hard to get true control groups
    • Hard to control outside influences/confounds (therefore, stimulus less impactful)
    • Difficult to control independent variable (therefore, hard to draw “cause-effect” relationships)
    • Cost/time
    • Ethical considerations in selecting control group
    • Demand characteristics
    • Evaluation apprehension
    • Experimenter expectancy
Surveys
  • PROS
    • Natural setting (perhaps more ext. validity)
    • More realistic (more believable)
    • Describes the population
  • CONS
    • Hard to control
    • May not be as generalizable as we think
    • Difficult to replicate
    • Not causal
    • More bias (therefore, less reliable)
    • Cross-sectional (measure everything at once—may lead to common method variance)

(Adapted from group and course notes)
(Flashcards and other resources here)

Research Design and Time

By: Clau González on 7/13/2014 at 2:27 PM Categories:
In the previous post, I detailed a few different types of research designs. In this post I will talk about an important element in research design: Time.

According to Mitchell and James, time is treated as a commodity that can be broken into meaningful segments or blocks. It flows evenly and consistently, it’s precise and quantifiable, and is ordinal.

Time is an important variable to consider in theory because theory concerns the causal relationship between X and Y, and time is a marker. In the relationship between X and Y, there are issues of time lag, duration, rate of change, reciprocal causality, and non-linear, cyclical, and oscillating effects. Thus, time in theory is significant. By the same token, time is important when designing a research study because when X and Y occur, and when they are measured, can make significant differences in the results. If we ignore time in the theory-building phase, we build bad theory. If we ignore time in the design phase, it may lead to bad design and misleading results.

Five major ways in which theory informs method with respect to time:
  1. First, we need to know the time lag between X and Y. How long after X occurs does Y occur? 
  2. Second, X and Y have durations. Not all variables occur instantaneously. 
  3. Third, X and Y may change over time. We need to know the rate of change. 
  4. Fourth, in some cases we have dynamic relationships in which X and Y both change. The rate of change for both variables should be known, as well as how the X,Y relationship changes.
  5. Finally, in some cases we have reciprocal causation: X causes Y and Y causes X. This situation requires an understanding of two sets of lags, durations, and possibly rates.
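
The first point, finding the lag between X and Y, can be explored by scanning lagged correlations. In the simulated series below, a four-period lag is built in, and it shows up as the peak correlation:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x = pd.Series(rng.normal(size=300))
y = 0.7 * x.shift(4).fillna(0) + rng.normal(size=300)  # Y follows X by 4 periods

# Scan candidate lags: the peak suggests the X -> Y time lag.
for lag in range(9):
    print(lag, round(x.shift(lag).corr(y), 2))
```
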
There are several ways to incorporate time into a research design. First, we can think of the timing of measurement. If we knew the time relation between the variables of interest, we could consider time easily. However, most of the time we do not, because theory rarely specifies time issues. By posing some questions, we can address this: When should I start measuring X (start-time issue)? How stable is X while it is being measured (stability issue)? When does Y appear after X (lag issue)?

Second, we can address the time issue in our study by considering the frequency of measurement. Y may change over time systematically, but not in a particularly complex way. To assess the rate of change we need multiple assessments, informed by theoretical consideration of intra-individual change, inter-individual change, and contextual change.

Lastly, we want to examine the issue of measure stability. Changes in the assessment of a variable over time can be due to random error, systematic error or systematic change. For this issue, test-retest assessment can provide information on the stability of X and Y and time-series (numerous observations on a small number of subjects) designs assess sources of error over time.

Some ways to think about time include:
  • Construct Mitchell’s Moderation by Causal Cycle curve: illustrated periods of time in which X,Y interact
    • Equilibration period: period it takes X to affect Y and for Y to reach state of constancy of state of equilibration
    • Equilibrium period: when Y reaches state of equilibration, Y is said to enter an equilibrium type condition. The scores of Y may continue to change in this period, but the changes are small and constancy is resumed fairly quickly. This is when Y should be measured
    • Entropic period: changes in Y are completely uncertain with respect to given set of measurements on Y. This is final state of causal cycle. This is when you need to stop measuring Y. 
  • Draw diagram for relationships between X,Y in order to get an idea of issues such as lags, change and reciprocal causality
  • Consider the amount or rate of change. How much does X affect Y? Is this rate constant? If not, do you have to measure Y multiple times?
  • Specify time in your research design before you begin
  • The Frequency of Measurement--As Kelly and McGrath (1988) point out, we need at least three assessments to look at a curvilinear relationship; four for oscillation; and perhaps more for rhythms, spirals, and cycles.
  • Stability (reliability)--Test-retest assessments (if variables are assessed during a steady-state or equilibrium period) can provide information on the stability (reliability) of X and Y, and time-series designs, as we mentioned earlier, also assess sources of error over time.

(Adapted from group and course notes)
(Flashcards and other resources here)

Research Designs

By: Clau González on 7/13/2014 at 12:09 PM Categories:
In the Causality post, I discussed the Solomon four group design. This design addressed some concerns on internal and external validity.

In this post, I will discuss other research designs, their advantages and disadvantages.

Nonequivalent Control Group Design
Advantages

  • Controls for the main effects of history, maturation, testing, instrumentation, selection, and mortality.

Disadvantages

  • We do not test for the interaction between selection and maturation.  
  • The problem of regression.

Counterbalanced Design
Advantages
  • This design has internal validity on all the individual points.
Disadvantages
  • There are systematic selection factors involved in the natural assemblage of the groups, as well as the effects associated with specific sequences of treatments.
Multiple Time-Series
Advantages
  • An extended version of the Time Series design and the Nonequivalent Control Group design, this model controls for all threats to internal validity.
Disadvantages
  • The only disadvantage is that we are not controlling for potential interaction of testing and selection with X.
Time Series
Advantages
  • Maturation is well controlled for (unless it is very abrupt), instrumentation is also well controlled for assuming we keep the same measures.  
  • Regression, selection and mortality effects may also be ruled out.
Disadvantages
  • Failure to control history is the biggest weakness.

(Adapted from group and course notes)
(Flashcards and other resources here)

Causality

By: Clau González on 7/13/2014 at 11:54 AM Categories:
A crucial part of research design is the ability to establish causality. In order to make a statement about causality, three conditions are necessary:
  • the cause precedes the effect,
  • the cause and effect covary, and
  • there is no plausible alternative explanation for the covariation.
In order to make these inferences, Campbell and Stanley (1963) argue that the Solomon four-group design is best suited for making statements of causality.

The true Design 4 (the pretest-posttest control group design) can maintain internal validity. History, maturation, and testing are controlled. Regression is controlled insofar as mean differences are concerned. Selection is ruled out through randomization. The design can tell whether mortality and instrumentation offer plausible rival hypotheses. The Solomon four-group design not only increases external validity by controlling the main effect of testing and the interaction of testing and X, but it also replicates the effect of X. Among the pre-experimental designs, the one-shot design is least suited for these inferences. This design totally lacks control and is of no scientific value. The process of comparison is required.

Correlation does not indicate causality. Correlation simply implies that the mean difference between groups shows that the two groups are related on some attributes.

  • Correlational data helps to disconfirm existence of causal relationships.
  • Weak correlational design: Consider the case in which two units of analysis (two groups for comparison) are being observed outside the laboratory setting. One group gets treated with X, the other doesn’t. However, these two groups differ on many attributes other than the presence/absence of X. Each of these other attributes could create differences in the Os, and each therefore provides a plausible rival to the hypothesis that X had an effect.
  • Stronger correlational design that could point to more causal relationships: You still have groups for comparison, but X is a naturally occurring event that varies with one group more so than it does with the others. The key is that X is not artificially implanted, but is naturally part of one group and not a part of the other. Example: heavy smoking and lung cancer.

A correlational study must meet the first three kinds of validity (internal validity, statistical conclusion validity, and construct validity) (Mitchell, 1985). Then, based on Popper's (1959, 1963) falsification orientation, causal hypotheses with a correlation of zero will be disconfirmed; otherwise the hypotheses survive and will be examined further by cross-validation or other methodologies.

Furthermore, in a correlational study, Campbell & Stanley (1963) suggest that if the correlation is zero, the hypothesis on causality can be disconfirmed, otherwise the hypothesis survives and the researcher can examine it further through the use of other settings.

The causal interpretation of a simple or a partial correlation depends upon both the presence of a compatible plausible causal hypothesis and the absence of plausible rival hypotheses to explain the correlation upon other grounds.  Any third variable that could affect the signaling frequency of both pairs of drivers in a similar fashion becomes a plausible rival hypothesis.

(Adapted from group and course notes)
(Flashcards and other resources here)

Internal and External Validity

By: Clau González on 7/13/2014 at 11:50 AM Categories:
In the previous post, I discussed the four types of validity.

Here, I will discuss validity in terms of research design. As we create and design the way we are going to answer our research question, there are threats to internal validity and external validity that must be considered.

Internal validity seeks to isolate the cause and effect relationship. In other words, it is focused on making sure that the manipulation (independent variable) has some effect on the dependent variable. This sheds light on the true relationship between the variables. The primary threats to internal validity are:

  • History 
  • Maturation
  • Testing
  • Instrumentation
  • Statistical regression
  • Selection
  • Experimental mortality
  • Selection-maturation interaction

On the other hand, external validity is focused on the ability to generalize beyond the original study across times, settings, measurements, and persons. Threats include:

  • Reactive or interaction effect of testing.
  • Interaction effects of selection biases and the experimental variable.
  • Reactive effects of experimental arrangements.
  • Multiple-treatment interference.

I will discuss different designs in a future post (laboratory, survey, field). But it is relevant to note how some of these research designs might have internal/external validity problems. 

In laboratory studies, for example, it is possible to isolate the variables of interest to determine if there is a relationship between the independent and dependent variables. This means it is high on internal validity. However, because the laboratory is a highly controlled environment, it is difficult to generalize to conditions outside the lab. So, in a lab study it is important to create a balance between internal and external validity. The greater internal validity, the more controlled the lab experiment is, and so the more difficult it is to generalize.

Field studies are the opposite. When introducing a manipulation, it is difficult to show that it was the manipulation and not something else that had an effect on the dependent variable. The external validity in a field study has fewer challenges to overcome than in a lab study. It is possible, with careful research design, to generalize to settings beyond the original study.

Depending on the goal, internal or external validity may be more important. If the main goal is only to determine the true relationship between the independent and dependent variables, external validity is not necessarily important. If the main goal is to apply the findings to the world beyond the lab then external validity is important. Ultimately, research goals are interested in both, and thus a balance between internal and external validity must be found. It is possible to have a research design that uses a laboratory study to determine the relationship of two variables, and follow up with a field study to test it. 

(Adapted from group and course notes)
(Flashcards and other resources here)

Hypothesis Testing

By: Clau González on 7/11/2014 at 10:43 AM Categories:
There are a few standard steps in hypothesis testing:
  1. State the hypothesis in general terms
    • Exploratory approaches.
    • Intentional search approaches.
    • Extending-coupling approaches.
  2. Operationalize the hypothesis:
    • What will be measured/observed? (Dependent variables) 
    • What will be manipulated?  (Independent variables) 
    • How are these tied to the hypothesis? 
  3. What methods will be employed?
    • How will you test your hypothesis? 
    • How will the relation between your independent and dependent variables be examined?
  4. What results do you anticipate?
    • How will the results provide evidence for your hypothesis?
It is also important to note that a hypothesis is never really proved or disproved. You provide evidence either in favor of the hypothesis or evidence that casts doubt on it. Empirical evidence never proves a hypothesis as a logical consequence; at best, it “establishes” the hypothesis for acceptance. A core tenet of the scientific process and of theory building is falsifiability.

Furthermore, statistical tests aid in making inferences from the observed sample to the unobserved population. That is, they let you evaluate the population indirectly.  One can only go from the denial of S (sample) to the denial of P (population), not from the assertion of S to the assertion of P. Statistical tests also allow making statements about the reliability of the measure.
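
To make that sample-to-population logic concrete, here is a minimal sketch in Python. The group means, sample sizes, and effect are entirely invented for illustration:

    # Hypothetical two-sample t-test: does a manipulated group differ from a control?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    treatment = rng.normal(loc=5.5, scale=1.0, size=30)  # manipulated group (invented)
    control = rng.normal(loc=5.0, scale=1.0, size=30)    # comparison group (invented)

    t_stat, p_value = stats.ttest_ind(treatment, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

    # A small p-value lets us reject the null (denial of S -> denial of P).
    # A large p-value does NOT prove the null: evidence can cast doubt on a
    # hypothesis, but never prove it.

Note that the conclusion runs in only one direction: the test can give us grounds to deny a population claim, never to assert one with certainty.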

(Adapted from group and course notes)
(Flashcards and other resources here)

Validity

By: Clau González on 7/10/2014 at 3:03 PM Categories:
The most common definition of validity is typified by the question: are we measuring what we think we are measuring? There are four types of validity: face, content, criterion, and construct.

First is construct validity. It focuses on measuring concepts that are not directly observable and that we try to infer, for instance intelligence, anxiety, or attitude. Rather than focusing on the tests themselves, the focus is on the meaning of the tests and their factors. Construct validity, then, is concerned with validating not just the test, but also the theory, the theoretical constructs, and scientific empirical inquiry (1). In addition, both convergence and discriminability are required.

  • Convergence means that the evidence gathered all indicates the same thing (1). 
  • Discriminability means that it is possible to point out which measurements are not related to the construct. (Both are illustrated in the sketch below.) 
  • This is the most important form of validity because it connects measures with theories. 
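
As a quick illustration of convergence and discriminability, here is a small sketch with simulated (entirely invented) data: two scales built from the same underlying construct should correlate highly with each other, and weakly with an unrelated variable.

    # Invented data: two measures of the same construct vs. an unrelated one.
    import numpy as np

    rng = np.random.default_rng(0)
    true_anxiety = rng.normal(size=200)                       # unobservable construct

    scale_a = true_anxiety + rng.normal(scale=0.4, size=200)  # anxiety measure A
    scale_b = true_anxiety + rng.normal(scale=0.4, size=200)  # anxiety measure B
    shoe_size = rng.normal(size=200)                          # unrelated variable

    print(np.corrcoef(scale_a, scale_b)[0, 1])    # high correlation -> convergence
    print(np.corrcoef(scale_a, shoe_size)[0, 1])  # near zero -> discriminability

With real data the correlations would of course be messier, but the pattern (high within-construct, low across-construct) is what convergence and discriminability look for.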

Criterion validity compares test scores with one or more external variables, or criteria, known to measure the attribute of interest. There are two types of criterion validity: predictive and concurrent.

  • Predictive validity focuses on future performance. 
  • Concurrent validity compares test scores with a criterion measured at the same time. 
  • The information criterion validity provides is useful for new tests. However, its biggest challenge is choosing the criterion used in the comparison.

The next two types of validity, content and face, rely mostly on judgment and appearance, and the information they provide is largely obvious.

Content validity focuses on the question: is the content of this measure representative of the universe of content of the property being measured?  A test with high content validity would, in theory, be a representative sample of that universe of content. This type of validity relies on judgment.  Some universes of content are easier to judge than others; for the obvious ones, content validity is assumed. For instance, a test of arithmetic to determine whether students can do addition.

Last is face validity, which focuses on checking whether the measure obviously and clearly measures the intended construct. Measures are difficult to justify using face validity alone. However, it is the most basic form, and without it the rest of the validity methods would not work. For example, if we intend to measure a person’s ability to run but we measure the number of pillows they own, face validity would indicate that this is not an obviously valid measure.

(Adapted from group and course notes)
(Flashcards and other resources here)

Reliability

By: Clau González on 7/10/2014 at 2:55 PM Categories:
Reliability is the consistency or stability of a measuring instrument.  Kerlinger defined it as "the proportion of the ‘true’ variance to the total obtained variance of the data yielded by a measuring instrument.”  Operationally, this translates to the proportion of error variance to the total variance yielded by a measuring instrument subtracted from 1.00 (the index of 1.00 indicating perfect reliability).

Schwab defined reliability as the ratio of “true” to total variance in a set of parallel measurements obtained on an individual. Mitchell, meanwhile, discussed reliability as the correlation between maximally similar items, which assesses random or chance error.

It is also useful to think of reliability in terms of variance components:
VO = VT + Ve   (observed variance = true variance + error variance)
Reliability is then VT / VO, or equivalently 1 − (Ve / VO), which matches Kerlinger’s definition above.
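
As a quick sanity check of that decomposition, a few lines of arithmetic (the variance numbers are invented):

    # Invented numbers: suppose true variance is 8 and error variance is 2.
    v_true, v_error = 8.0, 2.0
    v_observed = v_true + v_error        # VO = VT + Ve
    reliability = v_true / v_observed    # equivalently: 1 - (v_error / v_observed)
    print(reliability)                   # 0.8 -> 80% of observed variance is "true"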

The four methods for assessing reliability are:

  1. Kuder-Richardson-21 (KR-21) is considered the most conservative estimate of reliability for an instrument with binary scoring, i.e., where the response scale is dichotomous (e.g., true or false).  The Kuder-Richardson formulas are special cases of Cronbach’s coefficient alpha.  Cronbach’s alpha is the most frequently used metric because it is a) conservative, b) easier and less time-consuming than test-retest or parallel forms, and c) does not involve cutting out data as in the split-half test. (A sketch of computing alpha appears after this list.) 
  2. Split-half: This involves splitting the items into two halves, with the goal of obtaining two equal or equivalent halves.  Each person will have two half-scores; the two halves are compared (e.g., using a Pearson correlation) to assess internal consistency.  This is fairly conservative because it will underestimate the true reliability, since it is only the correlation of two halves of the test.  Splitting the items may be difficult, and the assessment of reliability now depends on fewer items.
  3. Parallel forms: This involves creating two measures that are equivalent, but not identical.  This can be very time-consuming for the researcher.  Each person would be subjected to measurements by both instruments.  The score of these two measures is then compared for consistency.  This measure is less conservative because there is the chance for fatigue or boredom when respondents must complete two measurement indices.  (Test A and Test B).
  4. Test-retest: Used to measure the stability of a measure over time.  This involves administering the same measurement instrument to the same group of people on two different occasions.  It is not a good way of computing the reliability coefficient if attrition is high or if the organisms being measured go through dramatic developmental changes between time 1 and time 2.
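
Here is the sketch promised above: Cronbach's alpha and a split-half estimate computed on invented data (six items driven by one underlying trait).

    # Invented data: 100 people answering 6 items that share one trait.
    import numpy as np

    rng = np.random.default_rng(1)
    n_people, n_items = 100, 6
    trait = rng.normal(size=(n_people, 1))
    items = trait + rng.normal(scale=0.8, size=(n_people, n_items))

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)
    k = n_items
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Split-half: correlate odd-item and even-item half-scores, then apply
    # the Spearman-Brown correction for full test length.
    half_a = items[:, ::2].sum(axis=1)
    half_b = items[:, 1::2].sum(axis=1)
    r_half = np.corrcoef(half_a, half_b)[0, 1]
    r_full = 2 * r_half / (1 + r_half)

    print(f"alpha = {alpha:.2f}, corrected split-half = {r_full:.2f}")

The Spearman-Brown correction is my addition here; without it, the raw half-test correlation underestimates full-test reliability, which is the conservatism mentioned above.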
(Adapted from group and course notes)
(Flashcards and other resources here)

Measurement

By: Clau González on 7/09/2014 at 4:39 PM Categories:
Kerlinger and others have discussed measurement bias and measurement development.

Measurement biases involve systematic error that can occur in collecting relevant data. Common measurement biases include:

  • Instrument bias. Instrument bias occurs when calibration errors lead to inaccurate measurements being recorded, e.g., an unbalanced weight scale. (questionnaires, company records)
  • Insensitive measure bias. Insensitive measure bias occurs when the measurement tool(s) used are not sensitive enough to detect what might be important differences in the variable of interest. (questionnaires)
  • Expectation bias. Expectation bias occurs in the absence of masking or blinding, when observers may err in measuring data toward the expected outcome. This bias usually favors the treatment group. (behavioral observations)
  • Recall or memory bias. Recall or memory bias can be a problem if outcomes being measured require that subjects recall past events. Often a person recalls positive events more than negative ones. Alternatively, certain subjects may be questioned more vigorously than others, thereby improving their recollections. (questionnaires)
  • Attention bias. Attention bias occurs because people who are part of a study are usually aware of their involvement, and as a result of the attention received may give more favorable responses or perform better than people who are unaware of the study’s intent. (behavioral observations, questionnaires)
  • Verification or work-up bias. Verification or work-up bias is associated mainly with test validation studies. In these cases, if the sample used to assess a measurement tool (e.g., a diagnostic test) is restricted only to those who have the condition or factor being measured, the sensitivity of the measure can be overestimated.

To develop a new measure in a field, the following process is suggested:
  1. Specify domain of construct
    • Extensive literature review to define the exact construct I want to measure or evaluate.  
    • Measuring this construct would involve developing a scale that captures the degree of presence or absence of the construct, or of the items making it up. 
    • In the instance of survey questions, the scale may be unipolar (only positive values, capturing different degrees of the same attribute) or bipolar (positive and negative values, conveying opposing dimensions).
  2. Empirically determine the extent to which items measure that domain
    • The levels of the scale need to be appropriate so that differences between measures can be interpreted as quantitative differences in the property measured. 
    • Questions would be closed-response to reduce variability in the responses and reduce ambiguity. From the initial sample of respondents, classify respondents into categories and employ a factor analysis to verify construct validity. 
    • Each item on the questionnaire needs to address a single issue and measure a single thing. 
    • Each of the questions would then be evaluated for inter-item correlation to make sure multiple questions are not redundantly measuring the same thing, which simplifies the measure.  
    • Construct validity would then be verified by checking that the measure correlates with other measures of the same construct (convergent validity) and does not correlate with measures of different constructs (discriminant validity).
  3. Examine the extent to which the measure produces results that are predictable from the theoretical hypotheses.
    • Half of this initial sample would be used for exploratory factor analysis; after the measures and groupings were defined, the other half of the sample would be tested with confirmatory factor analysis, along with Cronbach’s alpha, to verify that the constructs we were measuring fit the data appropriately (a sketch of this split-sample step follows below). 
    • This split-sample replication helps assess the consistency, reliability, and validity of the measure.
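
To make the split-sample idea concrete, here is a sketch using invented data and scikit-learn's FactorAnalysis for the exploratory step. (A true confirmatory factor analysis would require a dedicated SEM package; this sketch only inspects the loading pattern and checks Cronbach's alpha on the holdout half.)

    # Invented data: 6 items, 3 loading on each of two underlying factors.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(2)
    n = 400
    f1 = rng.normal(size=(n, 1))
    f2 = rng.normal(size=(n, 1))
    X = np.hstack([
        f1 + rng.normal(scale=0.6, size=(n, 3)),  # items 1-3 load on factor 1
        f2 + rng.normal(scale=0.6, size=(n, 3)),  # items 4-6 load on factor 2
    ])

    half1, half2 = X[: n // 2], X[n // 2:]

    # Exploratory step on the first half: inspect the loading pattern.
    fa = FactorAnalysis(n_components=2).fit(half1)
    print(np.round(fa.components_.T, 2))  # rows = items, columns = factors

    # Internal consistency of the first item group on the holdout half.
    items = half2[:, :3]
    k = items.shape[1]
    alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                             / items.sum(axis=1).var(ddof=1))
    print(f"alpha (items 1-3, holdout half) = {alpha:.2f}")

Holding out half the sample keeps the confirmatory check honest: the structure is not being verified on the same data that suggested it.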
(Adapted from group and course notes)
(Flashcards and other resources here)

Theory Evaluation

By: Clau González on 7/09/2014 at 4:09 PM Categories:
The cycle-of-theory-building approach discussed in the previous post lets us see the big picture of the theory-building process. This way of building theory allows us to integrate the dichotomies in business academia: between field-based research and large-sample data analysis, between theoretical and applied research, and between deductive and inductive theory building. Hence, it leads us to develop good research questions.

There are a few ways to evaluate theory.

  • Falsifiability: must be possible to refute empirically.
  • Utility: refers to the usefulness of theoretical systems.
    • Explanation: establishing the meaning of constructs, variables, and linkages.
    • Prediction: testing that meaning by comparing it to empirical evidence; theory provides a mechanism for predicting beyond chance.

Corley and Gioia (2011) discuss theory contribution in terms of different quadrants:
  • Incremental Insight – Is the contribution “significant”? Do we get closer to the truth?
  • Revelatory Insight* – Does it surprise us?
  • Scientific Utility* – Does it improve the conceptual rigor or assist in forming testable predictions?
  • Practical Utility – Can the theory be applied to real-world problems? 
Note (*): In Corley and Gioia’s two-by-two diagram (not reproduced here), papers further to the northeast are considered to provide a greater contribution than papers to their southwest.

(Adapted from group and course notes)
(Flashcards and other resources here)

Theory Building

By: Clau González on 7/09/2014 at 3:59 PM Categories:
There are different ways in which one can build a theory. However, a few things must be kept in mind:
  1. Completeness
    • Are all relevant factors included?
  2. Parsimony
    • Should some factors be deleted because they provide little value?
  3. Relationships
    • Why are you selecting the various factors and why do they relate? 
    • This gets at the author’s assumptions and why people should care (underlying economic and social dynamics, etc.).
Christensen (2001) suggested a synthesized model of theory building drawn from a range of fields. The model consists of four stages that repeat; this cycle of theory building includes both deductive and inductive modes.

  1. Observing phenomena, and carefully describing and recording those observations
  2. Classifying the phenomena into categories of similar things: The aim is to simplify and organize the world in ways that highlight the most meaningful differences amongst phenomena
  3. Building theories that explain the behavior of the phenomena (describing what causes what, why, and under what circumstances)
  4. Using the theory to predict what will be observed under various conditions, leading to more accurate description, a revised classification scheme, and/or a new statement of what causes what under what circumstances

Once researchers have defined a set of categories that are collectively exhaustive and mutually exclusive, the theory they have built becomes a paradigm. Christensen also argues that the discovery of anomalous phenomena is the pivotal element in the process of building improved theory, because observing an anomaly creates a more reliable classification mechanism. Furthermore, anomalies may lead to the toppling of a reigning paradigm. 

(Adapted from group and course notes)
(Flashcards and other resources here)

Theory

By: Clau González on 7/08/2014 at 3:37 PM Categories:
According to Babbie, theory is:
  • A systematic set of interrelated statements intended to explain some aspect of social life (they attempt to explain what we see). 
  • It is tied to observable events and makes predictions about empirical findings. 
  • And it is a series of structures linking constructs to action, eventually linked to behavior. 
One way to think about theory is to think about modeling.

Whetten (1989) suggests the Modeling-as-theorizing methodology for theory development that uses graphical modeling logic conventions. He suggests that “A theory is a collection of assertions, both verbal and symbolic that identifies WHAT variables are important for what reasons, specifies HOW they are interrelated and WHY, and identifies the CONDITIONS under which they should be related or not related.”

Here is some more explanation about “WHAT, HOW, WHY, and the CONDITIONS.”
  1. ‘Whats’-as-constructs: What are the elements of my conceptualization?
    • Brainstorming constructs using post-it notes, PIN. 
    • Assessing complementarity or compatibility of the constructs by considering the scope of the concepts and the coherence of the constructs.
  2. ‘Hows’-as-relationships: the specification of relationships between constructs is the key difference between a theory and a list:
    • Placing the core construct in the center.
    • Aligning constructs horizontally in terms of sequence.
    • Creating vertical dimension on the core sequence of the model for moderating constructs. 
    • Using arrows, directions, line thickness to indicate explicit theoretically relevant relationships in conceptualization.
  3. ‘Whys’-as-conceptual assumptions: The conceptual assumptions underlying a theory can be thought of as ‘second-order explanations’: the implicit whys underlying an explicit answer to a specific why question.
    • Considering various typologies in the field and how various conceptual assumptions can pose a threat to coherence.
  4. ‘When/where/who’-as-contextual assumptions: specify the contextual boundaries, or conditions, that circumscribe a set of theoretical propositions.
(Adapted from group and course notes)
(Flashcards and other resources here)