Studios, producers and agencies receive countless spec scripts – original feature-length screenplays that weren’t the product of a paid writing assignment. To filter through the hundreds or thousands of submissions and determine which are worth considering, or to identify writers with potential, the industry hires professional script readers to rate scripts.
The readers who write “coverage” can play a vital role in screenwriters’ prospects, but what do they think makes for a good script? One of the industry’s leading data scientists, Stephen Follows, has attempted to answer this exact question in a year-long study of the coverage scores of 12,309 feature film screenplays, the results of which fill a 65-page report.
The scripts analyzed were a mix, from amateur efforts to award winners, all of which were submitted to ScreenCraft – a service screenwriters use to enter contests and fellowships, or to pay for coverage of their scripts. According to ScreenCraft co-founder John Rhodes, the company’s freelance readers all have at least one year of experience reading scripts for a major production company, studio, agency or management company, and most are currently employed in the industry as professional readers for top companies including Paradigm, UTA, Amazon, Warner Bros., Blumhouse, The Black List, and the Nicholl Fellowships.
Follows spent 12 months combing through anonymized data on ScreenCraft’s servers looking for the most interesting correlations, in an effort to decode a group of largely anonymous industry gatekeepers. In an interview with IndieWire, Follows explained the goal was to answer the question of what readers think a good script contains.
For Follows, one of the most satisfying aspects of his study was that it exposed supposed “rules” that screenwriters should actually ignore.
“I think it was nice to discount some things that get talked about that I’ve often instinctively felt were people wasting their time and worrying about things that didn’t matter,” said Follows. “It was nice to sort of free up writers from that and just say, ‘look, just do what’s best for your script.’ This isn’t a poisoned thing that if you put it in your script, it’ll drag it down.”
For example, Follows showed there was no correlation between the use of voice-over – often cited as a storytelling crutch and a sign of bad writing – and a script’s overall rating. He also demonstrated that rules about page length are overblown: scripts ranging from 90 to 130 pages earned largely uniform scores. Only at the extremes – under 90 pages or over 130 – did scores start to drop.
One of Follows’ biggest findings came from separating scripts by genre. While historically based screenplays often scored higher, the real discovery was that within each genre there were individual ingredients readers were looking for.
“It makes sense when you think about it — genre is a promise to the viewer of what to expect in a film,” said Follows. “So for example, catharsis is something that’s important in all scripts, but for family films it was the number one thing, whereas with action films plot was much more important than catharsis.”
Follows’ results show that knowing what readers expect from a genre can be very helpful to a writer. He warns, though, that this may leave great, out-of-the-box writing out in the cold.
“Would ‘Reservoir Dogs’ pass a script reader?” questioned Follows. “I don’t know, maybe it would. But truly great scripts that are unusual and break some mold might fail, the way the industry has set things up. Maybe we’re shooting for the middle by looking for this. Maybe not. I don’t know.”
To swear or not to swear is a question screenwriters often debate, especially when writing their action description. Follows’ results show that swearing not only doesn’t hurt a screenplay’s overall score – it can actually help when script readers grade a writer’s “voice.”
You can find the entire report here.
via Chris O'Falt @ IndieWire