
November 11, 2011

Postdoc @ Yahoo! Research (NYC)

Filed in Jobs
Subscribe to Decision Science News by Email (one email per week, easy unsubscribe)

QUANTITATIVE / COMPUTATIONAL POSTDOC AT YAHOO RESEARCH IN NEW YORK CITY

The Human Social Dynamics Group in Yahoo! Research is seeking highly qualified candidates with strong quantitative and computational skills for a post-doctoral research scientist position. The successful candidate will contribute to the HSD group’s research agenda, and also work on applied problems of relevance to Yahoo! consumer and advertising products. Candidates must have completed (or be able to complete before starting) a PhD, preferably but not necessarily in CS/IS, statistics, or quantitative social science, and must be skilled programmers.

Desired skills include but are not limited to:
* Mining web data
* Statistical analysis of large data sets
* Building 3rd party apps (Open Mail, Facebook)
* Designing, building, and running web-based “virtual lab” experiments
* Designing, building, and running web-based field experiments

Application Instructions

Applications, including a CV and names/addresses of three referees, should be emailed to:

Duncan Watts, djw at yahoo-inc.com

For more information on HSD personnel and publications, see

http://research.yahoo.com/Duncan_Watts
http://research.yahoo.com/Sharad_Goel
http://research.yahoo.com/Dan_Goldstein
http://research.yahoo.com/Jake_Hofman
http://research.yahoo.com/Siddharth_Suri

November 1, 2011

SJDM Newsletter and Conference Program are ready for download

Filed in Conferences, SJDM, SJDM-Conferences

SOCIETY FOR JUDGMENT AND DECISION MAKING NEWSLETTER & 2011 CONFERENCE PROGRAM

This weekend (Nov 4-7, 2011) is the annual Society for Judgment and Decision Making Conference. Find the preliminary program for the conference in the current SJDM newsletter available for download here.

It’s not too late to make a spontaneous decision to attend the conference. Here are the facts.

WHERE: Sheraton Seattle Hotel & Washington State Conference Center, 1400 Sixth Avenue, Seattle, WA
WHEN: Nov 4-7, 2011
REGISTRATION: It’s too late to register early, but you can register on site
MAP: http://bit.ly/tnuhCz



October 24, 2011

Further advice for navigating the waters of mediation analysis

Filed in Articles, Encyclopedia, Ideas

MEANINGLESS MEDIATION

Decision Science News has posted before on Zhao, Lynch, and Chen’s practical article on mediation analysis. John Lynch has written the following, re-emphasizing the article’s main points:

Meaningless Mediation
John G. Lynch, Jr., University of Colorado
January, 2011

In August of 2010, JCR [Journal of Consumer Research – Ed] published an invited paper by Zhao, Lynch and Chen on common abuses of mediation analysis.

Zhao, Xinshu, John G. Lynch, Jr., and Qimei Chen (2010), “Reconsidering Baron and Kenny: Myths and Truths about Mediation Analysis,” Journal of Consumer Research, 37 (August), 197-206.

In a note accompanying the paper, the editor suggested that authors either follow its recommendations or take them into account if they chose to use an alternative approach. The paper made four points. As I observe how the paper is being used and adopted by JCR authors and authors at other journals, the least original of our recommendations is the most widely adopted, so in this note I want to restate the recommendations in order of importance.

1. Consider the discriminant validity of the mediator. Our single most important point is stated on the last page of Zhao et al. Many, many reports of mediation tests in consumer research and psychology are utterly meaningless because the authors have not demonstrated that the mediator is distinct from the independent variable or the dependent variable. When it is not distinct, the data will appear to support “full mediation” in Baron and Kenny’s terms and “indirect only” mediation in the parlance of Zhao et al.

A great many meaningless mediations are published in leading journals in which the mediator M is essentially a manipulation check (and hence, no discriminant validity from X) or an alternative measure of the conceptual dependent variable (and hence, no discriminant validity from Y). Some reviewers looking for any evidence of process may give “partial credit” for even meaningless mediations; this would encourage defensive insertion of meaningless mediation analyses by authors. We could save a lot of page space by deleting reports of these mediation results from the pages of JCR, JMR, and JCP. Until very recently, I have not seen much evidence that the Zhao et al. paper has had any deterrent effect on this error.

2. Embrace partial mediation and use unexpected “direct” effects to stimulate theorizing about omitted mediators. Our second most important point was that X-Y relationships are likely to have multiple mediators, and we researchers are usually not smart enough to test for more than one. In that case, it is likely that the data will sometimes indicate “indirect only” mediation (or “full mediation” in Baron and Kenny’s terms), but more often will support either the “competitive mediation” or “complementary mediation” outlined by Zhao et al. Here, an unexplained direct effect of X on Y accompanies a significant indirect effect X – M – Y as posited by the researchers. Followers of Baron and Kenny viewed those direct effects with mild embarrassment. We pointed out that the sign of the direct effect can often be a hint to the sign of some omitted mediator. I should note that model misspecification and omitted variable bias can lurk as easily in data that seem to be consistent with “indirect only” (“full”) mediation as in data where there is an unexplained direct effect. The great advantage of the latter case is that the sign of the direct effect gives the authors some tip that there is more to learn, and a hint of which direction to look in: for omitted indirect paths matching the sign of the “direct” effect. Write to me for an easy-to-understand example of “indirect only” results hiding omitted variable bias due to an omitted second mediator.

3. Test only for the indirect effect X – M – Y and not for an “effect to be mediated.” The Baron and Kenny procedure required that authors show a significant zero order effect of X on Y to establish “an effect to be mediated.” We showed that this effect is algebraically equivalent to the “total effect” of X on Y: the sum of the indirect effect of X on Y through M and the direct effect of X on Y. We noted that this total effect test is meaningless or superfluous. If the signs of the direct and indirect effects are opposite, it is easy to fail to observe an “effect to be mediated” or to observe an “effect to be mediated” of the wrong sign despite strong evidence for the posited indirect pathway. If the signs of the direct and indirect effects are the same, the test of the zero order effect of X on Y will always be significant when the indirect effect is significant – hence the test is superfluous here. We pointed out how nonsensical it was to treat a result as publishable when a posited indirect effect matched the sign of an unexplained direct effect, but not in the equally likely case in which the unexplained direct effect was opposite in sign. Ironically, about the time our paper was coming out, I received a rejection from a top journal with an AE report citing, among other failings, the marginal significance of the “effect to be mediated” in one of two replications.
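The opposite-signs case is easy to see with made-up numbers. In this hypothetical sketch (the coefficients are invented purely for illustration, not drawn from any study), the indirect and direct effects cancel exactly, so a Baron-and-Kenny “effect to be mediated” test would fail despite a substantial indirect path:

```python
# Hypothetical standardized path coefficients, invented for illustration
a = 0.5         # effect of X on M
b = 0.8         # effect of M on Y, controlling for X
c_prime = -0.4  # direct effect of X on Y, controlling for M

indirect = a * b            # 0.4: the posited indirect effect X -> M -> Y
total = indirect + c_prime  # 0.0: the zero-order X-Y "effect to be mediated"

# The total effect is exactly zero here, yet the indirect path is real
# and substantial -- which is why requiring a significant zero-order
# X-Y effect before testing mediation can be misleading.
print(indirect, total)
```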

4. Use the Preacher and Hayes bootstrap instead of the Sobel test. The least important and least original point in Zhao et al. is, ironically, the one that seems to have caught on: use bootstrap tests rather than Sobel tests for the indirect effect X-M-Y. This one is a “no brainer.” Bootstrap tests using the very simple-to-use Preacher and Hayes (2008) macro are almost always more powerful than Sobel tests, for reasons explained in our paper. There are no published bootstrap tests of mediation of within-subjects effects, where Sobel tests can be used. But in the usual between-subjects case, authors should head to Andrew Hayes’s website: http://www.afhayes.com/spss-sas-and-mplus-macros-and-code.html
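For readers curious what a percentile bootstrap of an indirect effect involves, here is a minimal simulation in Python. This is illustrative only: the sample size, path values, and variable names are invented, the simple OLS helpers stand in for a regression package, and the actual Preacher and Hayes (2008) macro runs in SPSS/SAS:

```python
import random

random.seed(1)
n = 200
# Simulated data with known paths a = 0.5 (X -> M) and b = 0.7 (M -> Y)
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.5 * x + random.gauss(0, 1) for x in X]
Y = [0.7 * m + 0.2 * x + random.gauss(0, 1) for x, m in zip(X, M)]

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx

def resid(x, y):
    """Residuals of y after regressing on x (to partial X out)."""
    b, mx, my = slope(x, y), sum(x) / len(x), sum(y) / len(y)
    return [yi - my - b * (xi - mx) for xi, yi in zip(x, y)]

def indirect(X, M, Y):
    a = slope(X, M)                      # path a
    b = slope(resid(X, M), resid(X, Y))  # path b, controlling for X
    return a * b

# Percentile bootstrap: resample cases with replacement, recompute a*b
boot = []
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(indirect([X[i] for i in idx],
                         [M[i] for i in idx],
                         [Y[i] for i in idx]))
boot.sort()
ci = (boot[24], boot[974])  # 95% percentile interval for the indirect effect
print(ci)
```

Because the product a*b has a skewed sampling distribution, the percentile interval captures its shape directly, whereas the Sobel test imposes a normal approximation; that is the intuition behind the power advantage described in the paper.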

Though many consumer researchers have started using bootstrap tests, I have had colleagues tell me that reviewers told them to remove bootstrap tests and replace them with Sobel tests. AEs should be vigilant to contradict such clearly incorrect advice if it appears in JCR reviews.

Though not emphasized in Zhao et al., the other major advantage of the Preacher and Hayes (2008) macro is that it makes it easy to test multiple mediator models.[1] Most published mediation tests consider a single mediator, though we assert in Zhao et al. that most X-Y relations likely have multiple mediators. Authors who are insightful enough to posit dual mediators almost always test each one piecewise using the Baron and Kenny tests we criticized. That’s wrong. With the Preacher and Hayes macro, it takes the same single line of code in SPSS or SAS to specify a multiple mediator model as to specify a single mediator model.

[1] Use MPLUS to analyze latent variable versions of the same multiple mediator models.

October 17, 2011

Poker is a game of skill: is mutual fund management?

Filed in Articles, Ideas, Research News

LUCK VS SKILL IN MUTUAL FUND MANAGEMENT

This week, two fun Econ-Finance papers, both with a Chicago link.

First is Steven Levitt and Thomas Miles‘ analysis of whether poker is a game of skill:

Using newly available data, we analyze that question by examining the performance in the 2010 World Series of Poker of a group of poker players identified as being highly skilled prior to the start of the events. Those players identified a priori as being highly skilled achieved an average return on investment of over 30 percent, compared to a -15 percent for all other players. This large gap in returns is strong evidence in support of the idea that poker is a game of skill.

It is a game of skill, they conclude. Read more here.

Next, the famous Fama and French duo asks the same question of mutual fund management. They find [drumroll] that there is a bit of evidence for skill (and its opposite) in the extreme tails of the distribution:

The aggregate portfolio of actively managed U.S. equity mutual funds is close to the market portfolio, but the high costs of active management show up intact as lower returns to investors. Bootstrap simulations suggest that few funds produce benchmark-adjusted expected returns sufficient to cover their costs. If we add back the costs in fund expense ratios, there is evidence of inferior and superior performance (nonzero true α) in the extreme tails of the cross-section of mutual fund α estimates.

However, they wouldn’t advise trying to find a top manager over passive indexing:

In other words, going forward we expect that a portfolio of low cost index funds will perform about as well as a portfolio of the top three percentiles of past active winners, and better than the rest of the active fund universe.

REFERENCES

Levitt, Steven D., and Thomas J. Miles (2011). The Role of Skill Versus Luck in Poker: Evidence from the World Series of Poker. NBER Working Paper No. 17023. [link]

Fama, Eugene F., and Kenneth R. French (2010). Luck versus Skill in the Cross-Section of Mutual Fund Returns. The Journal of Finance, LXV(5). [link]

Photo credit: http://www.flickr.com/photos/vizzzual-dot-com/2655969483/

October 11, 2011

Our research meets Saturday Night Live

Filed in Articles, Gossip, Ideas, Programs, Research News


AWKWARD FUTURE SELF INTERACTIONS

Decision Science News readers know about Hal Hershfield and Dan Goldstein‘s experiments in which they exposed people to interactive virtual-reality movies of their future selves to see how it would impact their saving behavior (pictured above). The idea was sent up in three Saturday Night Live fake commercials for Lincoln Financial (hat tip: Jake for alerting us). The SNL interactions with the future self were a lot more awkward than ours, but maybe that’s a good thing for changing behavior.

Links (mildly disturbing):

After seeing that, you may want to check out a selection of more wholesome media concerning our research.

Hershfield, H. E., Goldstein, D. G., Sharpe, W. F., Fox, J., Yeykelis, L., Carstensen, L. L., & Bailenson, J. N. (2011). Increasing saving behavior through age-progressed renderings of the future self. Journal of Marketing Research, 48, S23-S37.

Wall Street Journal Article Meet ‘Future You.’ Like What You See?

New York Times Article Some novel ideas for improving retirement income

Allianz report featuring the research Behavioral Finance and the Post-Retirement Crisis

October 5, 2011

Do cents follow Benford’s Law?

Filed in Ideas, R

MANY THINGS FOLLOW BENFORD’S LAW. CENTS DON’T.

A commenter on our last post brought up Benford’s law, the idea that naturally occurring numbers follow a predictable pattern. Does Benford’s law apply to the cent amounts taken from our 32 million grocery store prices?

Benford’s law, if you don’t know about it, is an amazing thing. If you know the probability distribution that “natural” numbers should have, you can detect where people might be faking data: phony tax returns, bogus scientific studies, etc.

In Benford’s law, the probability of the leftmost digit of a number being d is

log10(d + 1) - log10(d)

(according to the Wikipedia page, anyway).
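As a quick sanity check of that formula, the implied first-digit probabilities can be computed directly; here is a short Python sketch (separate from the R analysis code in this post):

```python
from math import log10

# First-digit probabilities implied by Benford's law
benford = {d: log10(d + 1) - log10(d) for d in range(1, 10)}

for d, p in benford.items():
    print(d, round(p, 3))
# Digit 1 is expected about 30.1% of the time, digit 9 only about 4.6%,
# and the nine probabilities sum to 1 (the logs telescope to log10(10)).
```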

In the chart above, we plot the predictions of Benford’s law in red points. The distribution of leftmost digits in our cents data is shown in the blue bars.

As the chart makes plain, cents after the dollar do not seem distributed in a Benford-like manner. And no one is “faking” cents data. It is simply that Benford’s is a domain-specific heuristic. Grocery store prices seem to be chosen strategically. Look at all those nines. Perhaps dollar (as opposed to cent) prices in the grocery store are distributed according to Benford, but we leave that as an exercise for the reader.

Want to play with it yourself? The R / ggplot2 code that made this plot is below, and anyone who wants our trimmed down, 12 meg version of the Dominick’s database (just these categories and prices) is welcome to it. It can be downloaded here: http://dangoldstein.com/flash/prices/.

if (!require("ggplot2")) install.packages("ggplot2")
library(ggplot2)

orig = read.csv("prices.tsv.gz", sep = "\t")
orig$cents = round(100 * orig$Price)%%100
# First digit
orig$fd = with(orig, ifelse(cents < 10, round(cents), cents%/%10))  
sumif = function(x) {sum(orig$fd == x)}
vsumif = Vectorize(sumif)
df = data.frame(Numeral = c(1:9), count = vsumif(1:9))
df$Probability = df$count/sum(df$count)
ben = function(d) {log10(d + 1) - log10(d)}
df$benford = ben(1:9)
p = ggplot(df, aes(x = Numeral, y = Probability)) + theme_bw()
p = p + geom_bar(stat = "identity", fill = I("blue")) 
p = p + scale_x_continuous(breaks = seq(1:9))
p = p + geom_line(aes(x = Numeral, y = benford, size = 0.1))
p = p + geom_point(aes(x = Numeral, y = benford, color = "red", size = 1))
p + opts(legend.position = "none")
ggsave("benford.png")

ADDENDUM 1:

Plyr / ggplot author Hadley Wickham submitted some much prettier Hadleyfied code, which I studied. It wasn't quite counting the same thing my code counts, so I modified it so that they do the same thing. Most of the difference had to do with rounding. The result is here:

library(plyr)
orig2 <- read.csv("prices.tsv.gz", sep = "\t")
orig2 <- mutate(orig2,
  cents = round(100 * Price) %% 100,
  fd = ifelse(cents < 10, round(cents), cents %/% 10))
df2 <- count(orig2, "fd")
df2 <- mutate(df2,
  prob = prop.table(freq),
  benford = log10(fd + 1) - log10(fd))
ggplot(df2, aes(x = fd, y = prob)) +
  geom_bar(stat = "identity", fill = "blue") +
  geom_line(aes(x = fd, y = benford, size = 0.1)) +
  geom_point(aes(x = fd, y = benford, color = "red", size = 1)) +
  theme_bw() +
  scale_x_continuous(breaks = 1:9)

Playing with system.time(), I found that the "mutate" command in plyr is slightly faster than my two calls to modify "orig" and much more compact. Lesson learned: use mutate over separate calls and over "transform".

My repeated calls to sum(orig$fd == x) are faster than Hadley's count(orig2, "fd"); however, "count" is much more general purpose and faster than "table" according to my tests, so I will be using it in the future.

ADDENDUM 2:

Big Data colleague Jake Hofman proves that awk can do this much faster than the R code above:

zcat prices.tsv.gz | awk 'NR > 1 {cents=int(100*$2+0.5) % 100;
counts[substr(cents,1,1)]++} END {for (k in counts) print k,
counts[k];}' > benford.dat

Jake's code runs in about 20 seconds; mine and Hadley's run in about a minute.

September 30, 2011

Dollars and cents: How are you at estimating the total bill?

Filed in Ideas, R

ROUNDING HEURISTICS AND THE DISTRIBUTION OF CENTS FROM 32 MILLION PURCHASES

When estimating the cost of a bunch of purchases, a useful heuristic is rounding each item to the nearest dollar. (In fact, on US income tax returns, one is allowed to round and not report the cents). If prices were uniformly distributed, the following two heuristics would be equally accurate:

* Rounding each item up or down to the nearest dollar and summing
* Rounding each item down, summing, and adding a dollar for every two line items (or 50 cents per item).
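The two heuristics are easy to compare on a made-up basket. In this sketch the prices are invented for illustration, and Python's round() stands in for nearest-dollar rounding:

```python
# Hypothetical basket of grocery prices (invented for illustration)
prices = [3.99, 1.49, 2.99, 0.79, 5.29, 2.19]

exact = sum(prices)

# Heuristic 1: round each item to the nearest dollar, then sum
h1 = sum(round(p) for p in prices)

# Heuristic 2: round each item down, then add 50 cents per item
h2 = sum(int(p) for p in prices) + 0.50 * len(prices)

print(exact, h1, h2)  # both heuristics land near the exact total of 16.74
```

If cents were uniformly distributed, both heuristics would be unbiased; whether they actually are is exactly the question the data below address.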

But are prices uniformly distributed? Decision Science News wanted to find out.

Fortunately, our Alma Mater makes publicly available the famous University of Chicago Dominick’s Finer Food Database, which will allow us to answer this question (for a variety of grocery store items at least).

We looked at over 32 million purchases comprising:

* 4.8 million cereal purchases
* 2.2 million cracker purchases
* 1.7 million frozen dinner purchases
* 7.2 million frozen entree purchases (though we’re not sure how they differ from “dinners”)
* 4.1 million grooming product purchases
* 4.3 million juice purchases
* 3.3 million laundry product purchases and
* 4.7 million shampoo purchases

The distribution of their prices can be seen above. But what about the cents? We focus down on them here:

As is plain, there are many “9s prices” (a topic well studied by our marketing colleagues), and there are more prices above 50 cents than below it. The average number of cents turns out to be 57 (median 59).

In sum (heh), it pays to round properly, though we do think some clever heuristics can exploit the fact that each dollar has on average 57 cents associated with it.

Anyone who wants our trimmed down, 11 meg, version of the Dominick’s database (just these categories and prices) is welcome to it. It can be downloaded here: http://dangoldstein.com/flash/prices/.

Plots are made in the R language for statistical computing with Hadley Wickham’s ggplot2 package. The code is here:

if (!require("ggplot2")) install.packages("ggplot2")
library(ggplot2)
orig = read.csv("prices.tsv.gz", sep = "\t")
summary(orig)
orig$cents = orig$Price - floor(orig$Price)
#sampledown
LEN = 1e+06
prices = orig[sample(1:nrow(orig), LEN), ]
prices$cents = round((prices$Price - floor(prices$Price)) *
    100, 0)
summary(prices)
p = ggplot(prices, aes(x = Price)) + theme_bw()
p + stat_bin(aes(y = ..density..), binwidth = 0.05,
    geom = "bar", position = "identity") + coord_cartesian(xlim = c(0,
    6.1)) + scale_x_continuous(breaks = seq(0, 6, 0.5)) +
    scale_y_continuous(breaks = seq(1,
    2, 1)) + facet_grid(Item ~ .)
ggsave("prices.png")
p = ggplot(prices, aes(x = cents)) + theme_bw()
p + stat_bin(aes(y = ..density..), binwidth = 1, geom = "bar",
    position = "identity",right=FALSE) + coord_cartesian(xlim = c(0, 100)) +
    scale_x_continuous(name = "Cents", breaks = seq(0, 100, 10)) +
    facet_grid(Item ~ .)
ggsave("cents.png")

September 19, 2011

OPIM Professorship at Wharton, rank open

Filed in Jobs, SJDM

PROFESSORSHIP AT THE DEPARTMENT OF OPERATIONS AND INFORMATION MANAGEMENT (OPIM), THE WHARTON SCHOOL, UNIVERSITY OF PENNSYLVANIA


The Operations and Information Management Department at the Wharton School, the University of Pennsylvania, is home to faculty with a diverse set of interests in decision-making, information technology, information-based strategy, operations management, and operations research. We are seeking applicants for a full-time, tenure-track faculty position at any level: Assistant, Associate, or Full Professor. Applicants must have a Ph.D. (expected completion by June 30, 2013 is acceptable) from an accredited institution and have an outstanding research record or potential in the OPIM Department’s areas of research. Candidates with interests in multiple fields are encouraged to apply. The appointment is expected to begin July 1, 2012, and the rank is open.

More information about the Department is available at:
http://opimweb.wharton.upenn.edu/

Interested individuals should complete and submit an online application via our secure website, and must include:

-A cover letter (indicating the areas for which you wish to be considered)
-Curriculum vitae
-Names of three recommenders, including email addresses [junior-level candidates]
-Sample publications and abstracts
-Teaching summary information, if applicable (courses taught, enrollment and evaluations)

To apply please visit our web site:
http://opim.wharton.upenn.edu/home/recruiting.html

Further materials, including (additional) papers and letters of recommendation, will be requested as needed. To ensure full consideration, materials should be received by November 14th, 2011, but applications will continue to be reviewed until the position is filled.

Contact:
Maurice Schweitzer
The Wharton School
University of Pennsylvania
3730 Walnut Street
500 Jon M. Huntsman Hall
Philadelphia, PA 19104-6340

The University of Pennsylvania values diversity and seeks talented students, faculty and staff from diverse backgrounds. The University of Pennsylvania is an equal opportunity, affirmative action employer. Women, minority candidates, veterans and individuals with disabilities are strongly encouraged to apply.

September 12, 2011

Enter your strategy in a tournament, win thousands of Euros

Filed in Programs, Research News

SECOND SOCIAL LEARNING STRATEGIES TOURNAMENT: 25,000 EUR PRIZE MONEY

DSN received the following announcement, which should be of interest to agent-based modelers out there. The first tournament led to a Science paper, not a bad outcome.

We would like to invite you, the members of your research group, and your colleagues to participate in The Second Social Learning Strategies Tournament, which we hope will interest you. The tournament, which has a total of 25,000 euro available as prize money, is now open for entries.

The tournament is a competition designed to establish the most effective means to learn in a complex, variable environment.

In recent years, there has been a lot of interest (spanning several research fields, but especially economics, anthropology, and biology) in the problem of how best to acquire valuable information from others. The first Social Learning Strategies Tournament, inspired by Robert Axelrod’s famous Prisoner’s Dilemma tournaments on the evolution of cooperation, attracted over 100 entries from all around the world, and a paper detailing the results was published in the journal Science in 2010*. The high level of interest convinced us that it would be worthwhile to organise a second tournament in which some of the restricting assumptions of the first could be relaxed, so as to explore a broader range of questions. We have received funding for this from the European Research Council, and a committee of world-leading scientists have helped us to design the tournament game, including Sam Bowles (Santa Fe Institute), Rob Boyd (UCLA), Marc Feldman (Stanford), Magnus Enquist (Stockholm), Kimmo Eriksson (Stockholm) and Richard McElreath (UC Davis).

Entrants will be required to submit behavioural strategies detailing how to respond to the problem of resource gain in a complex, variable environment through combinations of individual and social learning.

Three extensions to the first tournament game will (i) explore the effects of learners being able to select from whom to learn, (ii) allow agents to refine existing behavior cumulatively, and (iii) place the action in a spatially structured population with multiple demes. A total of 25,000 euro prize money is available, divided into three 5,000 euro prizes for the best strategy under any single extension, and a 10,000 euro prize for the best strategy under all three extensions.

The competition is now open for entries, with a closing date of

February 28 2012. More information can be found at:

http://lalandlab.st-andrews.ac.uk/tournament2/

We would like to encourage you, the members of your laboratories, and your colleagues and collaborators, to participate in this competition. Please do forward this message to anyone you think might be interested. We would also be grateful if you would print out and post the attached flier on your notice boards, and forward it to anyone you think might be interested.

We hope that this tournament will increase understanding of, and stimulate research on, the evolution of learning, as Axelrod’s tournament did for the evolution of cooperation.

*Rendell et al. (2010) Why copy others? Insights from the Social Learning Strategies Tournament. Science 328: 208-213

Image credit: Goldstein, D. G. (2009). Heuristics. In P. Hedström & P. Bearman (Eds.), The Oxford Handbook of Analytical Sociology. (pp. 140-164). New York: Oxford University Press. [Download]

September 5, 2011

Publish your health nudges

Filed in Articles, Programs, Research News

SPECIAL ISSUE OF HEALTH PSYCHOLOGY ON BEHAVIORAL ECONOMICS

Health has a major impact on both individuals and nations. Health problems
can impact a person’s emotional, financial and social state; they can also
affect a nation’s financial and social standing. Indeed, countries across
the globe are currently battling the increasing costs of health care
delivery, while others are trying to modernize their systems. Furthermore,
most nations face similar health related challenges such as reducing
unhealthy behaviors (poor diet and smoking), increasing healthy behaviors
(exercising), assisting disadvantaged populations in gaining better access to
health services, and improving adherence to medical treatment.

According to the Surgeon General’s Office the leading causes of mortality in
the U.S. have substantial behavioral components. It is no wonder, therefore,
that both psychologists and economists have been among the pioneers in
studying components associated with health behaviors and have provided a
range of successful behaviorally based prevention and treatment options.
Yet, the sheer extent of these problems calls for a more interdisciplinary
approach. In recent years a growing number of researchers have turned to
behavioral and experimental economics in the hopes of providing additional
insights to facilitate positive health behavior changes.

The aim of this special issue is to bring together the latest research in
behavioral and experimental economics on health related issues, stimulate
cross-disciplinary exchange of ideas (theories, methods and practices)
between health economists and psychologists, and provide an opportunity to
stimulate novel and creative ways to tackle some of the most important health
challenges we currently face. This special issue will be of interest not
only to a diverse range of researchers but to health professionals,
practitioners and policy makers alike.

With this call for papers, we hope to attract manuscripts that are
outstanding empirical and/or theoretical exemplars of research on any health
related topic from a behavioral and/or experimental economic perspective. We
anticipate studies will focus on a range of topics, including, but not
limited to: Smoking, Dietary choices, Adherence to treatment, Decision
making, Risk taking behavior, Choice architecture, Information asymmetry and
use of monetary incentives to alter behavior. We expect papers to reflect a
variety of methodologies but to highlight implications of the research for
practitioners and policy makers.

Authors should submit a short proposal (maximum of 400 words) that outlines
the plan for a full manuscript to Yaniv Hanoch, PhD, and Eric Andrew
Finkelstein, PhD, guest editors for the special issue, by March 1, 2012.
The proposal should outline the study question, methods and findings of the
proposed submission and note how the paper will align with the theme of the
special issue. Submissions are due August 1, 2012. Papers should be
prepared in full accord with the Health Psychology Instructions to Authors
and submitted through the Manuscript Submission Portal.
All manuscripts will be peer reviewed. Some papers not included in a
specific special section may be accepted for publication in Health
Psychology as regular papers. Please indicate in the cover letter
accompanying your manuscript that you would like to have the paper
considered for the Special Series on Health Psychology Meets Behavioral
Economics.