
November 23, 2016

Researcher and postdoc positions in Computational Social Science at Microsoft Research NYC

Filed in Jobs

MICROSOFT RESEARCH NEW YORK CITY

Microsoft Research NYC seeks outstanding applicants for researcher and postdoctoral positions in computational social science. Successful applicants will have strong quantitative and programming skills. For more information, see the call at the MSR NYC Computational Social Science website.

SPECIAL NOTE TO DECISION SCIENCE NEWS READERS

  • MSR-NYC is a seriously quantitative place. For the social science postdocs, applicants should have strong competence in computer programming, math, or statistics at the level of someone with a Bachelor’s or Master’s degree in CS, math, or stats. Simply meeting the stats requirements in a social science PhD program would not be enough to be considered.
  • In addition to having computational or mathematical skills, applicants must also have computational or statistical research interests to be considered.
  • The researcher positions are similar to professorships, with a focus on discovery and publication.
  • The postdocs are good preparation for a career in academia (often taken to defer starting a professorship by a year or two) and are not intended for people looking to move into industry.

November 14, 2016

Nominate a JDM researcher for the FABBS early career impact award

Filed in SJDM, SJDM-Conferences

DEADLINE THIS FRIDAY NOVEMBER 18, 2016


FABBS (Federation of Associations in Brain and Behavioral Sciences) is a coalition of scientific societies that share an interest in advancing the sciences of mind, brain, and behavior.

To recognize scientists who have made outstanding research contributions, FABBS grants early career impact awards. (Here early means within 10 years of receiving a PhD.)

Awards are rotated tri-annually among various subsets of societies that are members of this larger federation.

In 2017, the subset includes the Society for Judgment and Decision Making (SJDM).

Accordingly, we are seeking nominations for the FABBS early career impact award.

If you wish to recognize the contributions of a judgment and decision making (JDM) scholar who obtained their PhD in the last 10 years, please email the name of your nominee to Shane Frederick (shane.frederick at yale.edu) by this Friday, November 18th, 2016.

The SJDM executive board will review the set of nominees and make our recommendation to FABBS by November 30, 2016.

Those seeking more information about this award can obtain it here:

http://www.fabbs.org/fabbs-foundation/early-career-investigator-award/

November 9, 2016

4:1 longshot Trump wins election

Filed in Encyclopedia, Ideas, Research News, Tools

JUST ABOUT EVERYONE GOT IT WRONG, SOME CLASSES OF PREDICTIONS WERE LESS WRONG

[Figure: PredictIt 2016 US presidential election prediction market]

We know Decision Science News isn’t your main news source and assume you know that Donald Trump surprised many and won the election last night.

Models like the Princeton Election Consortium, which put Clinton’s probability of winning at 99%, probably need re-examining. Even PollyVote, which averages polls, models, expert judgment, prediction markets, and citizen forecasts, forecast a Clinton win with 99% probability. It’s an average of 20 sources, none of which predicted Trump would win the most electoral votes. Historically, the average of many predictions is hard to beat.
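To make the averaging concrete, here is a minimal sketch (with made-up component numbers, not PollyVote’s actual sources or weights) of combining probability forecasts by taking their mean:

    # Minimal sketch of forecast averaging (hypothetical numbers,
    # not PollyVote's actual components or weights).
    forecasts = {
        "polls": 0.85,     # each value is a hypothetical P(Clinton wins)
        "models": 0.99,
        "experts": 0.90,
        "markets": 0.75,
        "citizens": 0.88,
    }

    combined = sum(forecasts.values()) / len(forecasts)
    print(f"Combined forecast: P(Clinton wins) = {combined:.2f}")  # 0.87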

The PredictIt prediction market (pictured above) mispredicted it, though prediction markets weren’t that bad compared to other classes of forecast. In November, PredictIt was assigning Trump a 25-30% probability of winning. We bet against Trump on PredictIt when he was at 36% (roughly 2:1 against) and lost. This is sad for more than one reason.

Prediction market Hypermind (pictured below; the lower graph is zoomed in on November) fared similarly, giving Trump over a 25% chance for much of November (dates are written DD-MM-YY because Europe).

[Figures: Hypermind market over the full campaign, and zoomed in on November]

The Iowa Electronic Markets prediction market results are below. This is actually a winner-take-all market based on the popular vote plurality winner, but it’s close enough for jazz, meaning that people probably treat it the same as if it predicted the electoral vote winner (*). Note that this chart is on a different time scale (and we don’t have time to do anything about that), but focus on the period since August to compare to PredictIt and the period since October to compare to Hypermind. It showed some volatility, going from 40% Trump down to 10% and back up to 40% a week before the election, though its average November prediction is comparable to PredictIt and Hypermind.

The summary is that all the prediction markets were wrong, but they weren’t steadily predicting 10:1 against Trump either.

[Figure: Iowa Electronic Markets winner-take-all market]

Prediction market predictions were less wrong, going by something like the Brier score. Prediction markets predicted something near 20% to 25% for Trump, and a 4:1 or 3:1 horse won the race. As the French say, ça arrive (it happens).
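To make “less wrong” concrete, here is a minimal sketch of the Brier score for a single binary event (outcome coded Trump = 1, forecast probabilities as discussed in this post); lower scores are better:

    # Brier score for one binary event: squared error between the
    # forecast probability and the 0/1 outcome. Lower is better.
    def brier(forecast_p, outcome):
        return (forecast_p - outcome) ** 2

    trump_won = 1
    print(brier(0.01, trump_won))  # 99%-Clinton models: (0.01 - 1)^2 = 0.98
    print(brier(0.25, trump_won))  # prediction markets: (0.25 - 1)^2 = 0.5625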

We could talk about individual predictors like fivethirtyeight.com (below), which was volatile but still gave Trump over a 25% chance in November, and the Keys to the White House, a simple tallying model that did, just barely, predict that Trump would win. However, we feel it’s better to talk about classes of predictions (like expert judgments, prediction markets, or models) than individual cases. Also, fivethirtyeight.com made three different forecasts, so, how fair is that?

[Figure: fivethirtyeight.com forecast]

(*) One interesting thing is that the IEM market was correctly predicting that Hillary would capture the majority of the popular (as opposed to electoral) vote going into the election. On election day, it moved the wrong way (predicting Trump would win the popular vote). The day after the election it predicted a 95% chance that Hillary would win the popular vote.

November 4, 2016

2016 SJDM conference program available

Filed in SJDM, SJDM-Conferences

SOCIETY FOR JUDGMENT AND DECISION MAKING CONFERENCE STARTS FRI NOV 18, 2016


What: SJDM 2016 Conference
When: November 18 to 21, 2016
Where: Sheraton Boston Hotel, 39 Dalton St, Boston, MA 02199
Special Features
* Plenary address by Linda Babcock
* Tribute to Baruch Fischhoff
* Presidential address by Dan Goldstein
* Women in JDM networking event
* Einhorn Award revelation
* Social event at a swank speakeasy

As the Society for Judgment and Decision Making conference is right around the corner, it’s time to make your last-minute travel and hotel arrangements if you haven’t already. There have been quite a few early online registrations, and total registrations are expected to number around 675. It’s too late to register online, but you can register in person at the conference (which 15% to 20% of attendees do). At $400 onsite for members ($200 for student members), it’s one of the least expensive conferences around. It would be cheaper than that but, you know, Boston. If you aren’t a member, you can join here for $50.

You can download the current copy of the program here. As you know, the talks were selected by a representative panel of reviewers this year and we see many amazing talks and posters on the program.

See you soon in Boston!

October 28, 2016

On-off switch: How to remember what the line and circle mean. Think binary.

Filed in Encyclopedia, Ideas

THINK ONE AND A ZERO


Is our new glue gun powered on or off?

We were recently in a hotel in Berlin, Germany, where the room heater had the line-circle (| O) symbol on it, and we couldn’t remember whether the line means on and the circle means off, or the line means off and the circle means on.

Quality German engineering made the heater silent so there was no way to tell by listening.

The next day we got home to find our new glue gun had arrived. Same problem. And glue guns take a few minutes to heat up, so that’s annoying (and possibly dangerous).

Wikipedia to the rescue.

Turns out it’s best not to think of it as a line and circle.

Think of it as a 1 and 0.

Recall your computer science, logic, electrical engineering, whatever classes:

0 is FALSE, low voltage, or off
1 is TRUE, high voltage, or on

Boom. Retained for life.
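As a bonus for the programmers, the mnemonic checks out in code (a trivial Python example; the same convention holds in nearly every language):

    # 0 is falsy (off) and 1 is truthy (on), matching the switch symbols.
    print(bool(0), bool(1))  # prints: False True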

Want to remember which side of your rental car the gas cap is on?

October 21, 2016

Build your own distribution builders

Filed in Encyclopedia, Ideas, Programs, Research News, Tools

A JAVASCRIPT LIBRARY FOR ADDING DISTRIBUTION BUILDERS TO YOUR EXPERIMENTS

The drag-and-drop style Distribution Builder of Goldstein, Johnson, and Sharpe (2008).

A balls-and-buckets style Distribution Builder using Quentin André’s Javascript tool.

If you read this website, you probably want to elicit probability distributions from people. A Distribution Builder (DB for short) does just that, eliciting them as cognitively friendly frequency histograms.

The Distribution Builder was created by Dan Goldstein, Bill Sharpe, and Phil Blythe in 2000, and its first major publication came in 2008. The DB is a digital implementation of a method that was first used, as far as we can tell, by Kabus with poker chips in 1976, as cited in this paper by Goldstein and Rothschild (2014), which found that elicitation with a distribution builder beat conventional methods.

Now for the news. Quentin André has built a JavaScript distribution builder that anyone can use and adapt. It creates balls-and-buckets style distribution builders and has a nice website full of documentation.
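To give a flavor of the underlying idea (a minimal sketch of our own with made-up numbers, not Quentin André’s actual API), a balls-and-buckets allocation becomes a probability distribution when you normalize the ball counts:

    # Minimal sketch: turn a balls-and-buckets allocation into a
    # probability distribution (hypothetical bucket values and counts).
    balls = {10: 1, 20: 3, 30: 8, 40: 5, 50: 3}  # bucket value -> ball count

    total = sum(balls.values())
    distribution = {value: count / total for value, count in balls.items()}

    mean = sum(value * p for value, p in distribution.items())
    print(distribution)                  # elicited subjective probabilities
    print(f"Elicited mean: {mean:.1f}")  # 33.0 with these counts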

October 12, 2016

Social science does not reward citing outside the field

Filed in Research News

THE SOCIAL AND NATURAL SCIENCES DIFFER IN THIS REGARD


We have talked in the past about how economics does not cite other fields much (see Pieters and Baumgartner, 2002). Are authors rewarded for writing papers this way? In social science, the answer seems to be yes.

A recent article in PLoS ONE, “The Impact of Boundary Spanning Scholarly Publications and Patents” by Xiaolin Shi, Lada A. Adamic, Belle L. Tseng, and Gavin S. Clarkson, looks at the correlation between a paper’s impact and whether it cites within or across fields:

The question we ask is simple: given the proximity in subject area between a citing publication (paper or patent) and cited publication, what is the impact of the citing publication? If cross-disciplinary information flows result in greater impact, one would see a negative correlation between proximity and impact. On the other hand, if it is within-discipline contributions that are most easily recognized and rewarded, one would observe a positive correlation.

We find that a publication’s citing across disciplines is tied to its subsequent impact. In the case of patents and natural science publications, those that are cited at least once are cited slightly more when they draw on research outside of their area. In contrast, in the social sciences, citing within one’s own field tends to be positively correlated with impact.
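To illustrate the sign convention in the passage above (with invented numbers, not the paper’s data), a positive correlation between proximity and impact means within-field citing goes together with higher impact:

    # Toy illustration of the sign convention (invented data).
    # proximity: 1 = cites work in its own field, 0 = far afield.
    from statistics import correlation  # requires Python 3.10+

    proximity = [0.9, 0.8, 0.7, 0.4, 0.2]
    citations = [50, 40, 35, 20, 10]  # impact of each citing paper

    print(f"r = {correlation(proximity, citations):.2f}")  # positive here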

REFERENCE:
Shi X, Adamic LA, Tseng BL, Clarkson GS (2009) The Impact of Boundary Spanning Scholarly Publications and Patents. PLoS ONE 4(8): e6547. doi:10.1371/journal.pone.0006547

October 5, 2016

ACR 2016, Berlin, Germany, Oct 27-30

Filed in Conferences

ASSOCIATION FOR CONSUMER RESEARCH NORTH AMERICAN CONFERENCE GOES OUTSIDE NORTH AMERICA


What: ACR 2016
When: October 27-30, 2016
Where: Berlin, Germany
Conference website: http://www.acrweb.org/acr/
Conference email: ACRBerlin@rsm.nl
Registration: Registration link
Accommodation: Accommodation link

The 2016 North American Conference of the Association for Consumer Research will be held – for the first time – outside of North America. Satisfy your Wanderlust and join us in Berlin, Germany from Thursday, October 27 through Sunday, October 30, 2016 for this groundbreaking, boundary-spanning conference.

Berlin is one of the most exciting and interesting capital cities in the world. Its history – both distant and recent – has often been dramatic, leaving many signs and symbols on the city. In Berlin, the legacy of modern political struggles reverberates and feeds an amazing avant garde in art and design. It is a perfect place to broaden your academic horizons. The Maritim Hotel Berlin occupies a prime spot on the city’s Tiergarten park in the tranquil diplomatic quarter, within walking distance of the “Kurfürstendamm” and the “Potsdamer Platz.” The area houses many parliamentary and governmental institutions, including the Bundestag in the Reichstag building, the new German Chancellery, and the residence of the German President.

Decision Science News will be there with a workshop:

Turkshop: How to experiment with the crowd

Co-Chairs:
Dan Goldstein, Microsoft Research, USA
Gabriele Paolacci, Erasmus University Rotterdam, The Netherlands

Participants:
Kathryn Sharpe Wessling, The Wharton School, University of Pennsylvania, USA
Jason Roos, Erasmus University Rotterdam, The Netherlands
Eyal Pe’er, Bar Ilan University, Israel

Come hear about the latest research on online experiments on Amazon Mechanical Turk and its alternatives. Check your assumptions about crowdsourced participants. Learn how to design online experiments in a smart way. There will be plenty of time for interactive discussion.

photo credit: https://www.flickr.com/photos/zanaguara/2480378901

September 26, 2016

Power pose co-author: I do not believe that “power pose” effects are real.

Filed in Gossip, Ideas, Research News

A WELCOME BELIEF UPDATE ABOUT POWER POSES

Good scientists change their views when the evidence changes

In light of considerable evidence that there is no meaningful effect of power posing, Dana Carney, a co-author of the original article, has come forth stating that she no longer believes in the effect.

The statement is online here, but we record it as plain text below, for posterity.

###BEGIN QUOTE###
My position on “Power Poses”

Regarding: Carney, Cuddy & Yap (2010).

Reasonable people, whom I respect, may disagree. However, since early 2015 the evidence has been mounting suggesting there is unlikely any embodied effect of nonverbal expansiveness (vs. contractiveness)—i.e., “power poses”—on internal or psychological outcomes.

As evidence has come in over these past 2+ years, my views have updated to reflect the evidence. As such, I do not believe that “power pose” effects are real.

Any work done in my lab on the embodied effects of power poses was conducted long ago (while still at Columbia University from 2008-2011) – well before my views updated. And so while it may seem I continue to study the phenomenon, those papers (emerging in 2014 and 2015) were already published or were on the cusp of publication as the evidence against power poses began to convince me that power poses weren’t real. My lab is conducting no research on the embodied effects of power poses.

The “review and summary paper” published in 2015 (in response to Ranehill, Dreber, Johannesson, Leiberg, Sul, & Weber, 2015) seemed reasonable at the time, since there were a number of effects showing positive evidence and only 1 published study that I was aware of showing no evidence. What I regret about writing that “summary” paper is that it suggested people do more work on the topic, which I now think is a waste of time and resources. My sense at the time was to put all the pieces of evidence together in one place so we could see what we had on our hands. Ultimately, this summary paper served its intended purpose because it offered a reasonable set of studies for a p-curve analysis which demonstrated no effect (see Simmons & Simonsohn, in press). But it also spawned a little uptick in moderator-type work that I now regret suggesting.

I continue to be a reviewer on failed replications and re-analyses of the data — signing my reviews as I did in the Ranehill et al. (2015) case — almost always in favor of publication (I was strongly in favor in the Ranehill case). More failed replications are making their way through the publication process. We will see them soon. The evidence against the existence of power poses is undeniable.

There are a number of methodological comments regarding the Carney, Cuddy & Yap (2010) paper that I would like to articulate here.

Here are some facts

1. There is a dataset posted on dataverse that was posted by Nathan Fosse. It is posted as a replication but it is, in fact, merely a “re-analysis.” I disagree with one outlier he has specified on the data posted on dataverse (subject # 47 should also be included—or none since they are mostly 2.5 SDs from the mean. However the cortisol effect is significant whether cortisol outliers are included or not).
2. The data are real.
3. The sample size is tiny.
4. The data are flimsy. The effects are small and barely there in many cases.
5. Initially, the primary DV of interest was risk-taking. We ran subjects in chunks and checked the effect along the way. It was something like 25 subjects run, then 10, then 7, then 5. Back then this did not seem like p-hacking. It seemed like saving money (assuming your effect size was big enough and p-value was the only issue).
6. Some subjects were excluded on bases such as “didn’t follow directions.” The total number of exclusions was 5. The final sample size was N = 42.
7. The cortisol and testosterone data (in saliva at that point) were sent to Salimetrics (which was in State College, PA at that time). The hormone results came back and data were analyzed.
8. For the risk-taking DV: One p-value for a Pearson chi square was .052 and for the Likelihood ratio it was .05. The smaller of the two was reported despite the Pearson being the more ubiquitously used test of significance for a chi square. This is clearly using a “researcher degree of freedom.” I had found evidence that it is more appropriate to use “Likelihood” when one has smaller samples and this was how I convinced myself it was OK.
9. For the Testosterone DV: An outlier for testosterone was found. It was a clear outlier (+3 SDs from the mean). Subjects with outliers were held out of the hormone analyses but not all analyses.
10. The self-report DV was p-hacked in that many different power questions were asked and those chosen were the ones that “worked.”

Confounds in the Original Paper (Which should have been evident in 2010 but only in hindsight did these confounds become so obviously clear)

1. The experimenters were both aware of the hypothesis. The experimenter who ran the pilot study was less aware but by the end of running the experiment certainly had a sense of the hypothesis. The experimenters who ran the main experiment (the experiment with the hormones) knew the hypothesis.
2. When the risk-taking task was administered, participants were told immediately after whether they had “won.” Winning included an extra prize of $2 (in addition to the $2 they had already received). Research shows that winning increases testosterone (e.g., Booth, Shelley, Mazur, Tharp, & Kittok, 1989). Thus, effects observed on testosterone as a function of expansive posture may have been due to the fact that more expansive postured-subjects took the “risk” and you can only “win” if you take the risk. Therefore, this testosterone effect—if it is even to be believed–may merely be a winning effect, not an expansive posture effect.
3. Gender was not dealt with appropriately for testosterone analyses. Data should have been z-scored within-gender and then statistical tests conducted.

Where do I Stand on the Existence of “Power Poses”

1. I do not have any faith in the embodied effects of “power poses.” I do not think the effect is real.
2. I do not study the embodied effects of power poses.
3. I discourage others from studying power poses.
4. I do not teach power poses in my classes anymore.
5. I do not talk about power poses in the media and haven’t for over 5 years (well before skepticism set in).
6. I have on my website and my downloadable CV my skepticism about the effect and links to both the failed replication by Ranehill et al. and to Simmons & Simonsohn’s p-curve paper suggesting no effect. And this document.

References

Booth, A., Shelley, G., Mazur, A., Tharp, G., & Kittok, R. (1989). Testosterone, and winning and losing in human competition. Hormones and Behavior, 23, 556–571.
Ranehill, E., Dreber, A., Johannesson, M., Leiberg, S., Sul, S., & Weber, R. A. (2015). Assessing the Robustness of Power Posing: No Effect on Hormones and Risk Tolerance in a Large Sample of Men and Women. Psychological Science, 26, 653–656.
Simmons, J. P., & Simonsohn, U. (in press). Power Posing: P-Curving the Evidence. Psychological Science.

###END QUOTE###

September 19, 2016

Pre-conference on debiasing at the SJDM meeting in Boston Nov 18, 2016

Filed in Conferences

RIGHT BEFORE THE SOCIETY FOR JUDGMENT AND DECISION MAKING 2016 CONFERENCE


Carey Morewedge, Janet Schwartz, Leslie John, and Remi Trudel write:

We invite you to participate in a preconference on Friday, November 18th, 2016 at the Questrom School of Business at Boston University. The preconference will feature a day of talks on debiasing before the annual meeting of the Society for Judgment and Decision Making in Boston, MA. Rather than focusing on how to avoid or circumvent bias in particular contexts, our goal is to extend our field’s conversation about debiasing. To that end, the talks will present our state-of-the-art knowledge on improving decision-making abilities from three perspectives:

  • Who is more or less biased in their decision making?
  • Can we reduce bias within an individual?
  • When should we (not) reduce bias?

Keynote

  • Richard Larrick (Duke University)

Speakers

  • Rosalind Chow (Carnegie Mellon University)
  • Jason Dana (Yale University)
  • Calvin Lai (Harvard University)
  • Stephan Lewandowsky (University of Bristol)
  • Carey Morewedge (Boston University)
  • Emily Oster (Brown University)
  • Gordon Pennycook (Yale University)
  • Robert J. Smith (University of Miami, Harvard Law School)

The preconference runs from 9am to 4pm and includes invited talks, a datablitz, lunch, and a keynote. The conference will be hosted at the Questrom School of Business at Boston University, 595 Commonwealth Ave., Boston, MA 02215. All registered attendees are welcome to submit a presentation for the data blitz, an hour of 5-minute talks. Please submit a title and abstract of no more than 150 words before September 1st for consideration. Due to space limitations, registration is on a first-come, first-served basis until all seats are filled. Registration, which covers coffee and lunch, costs $40 for faculty and $20 for students/postdocs.

More information and a portal to sign up for the conference and submit a presentation for the data blitz can be found here: http://blogs.bu.edu/decision/