Last year I wrote an article on the subjectivity of game reviews. That year saw a lot of interesting developments in game reviews: Prey was given a 4/10 because an IGN writer’s review copy wasn’t working properly, and Jim Sterling was harassed on all channels for blaspheming Breath of the Wild with a 7/10 review. Both events sparked public outcry from readers who disagreed with the scores the journalists gave. Even here at Sirus we’ve had arguments over review scores – the same game might be given a completely different score by different reviewers in-house, depending on their preferences and specific experiences with it.
Maybe it’s the fault of the rating system? People have been giving scores to video games ever since people started writing about them (or maybe even before that!). This makes plenty of sense, as games are consumed in a similar fashion to their closest counterparts, films.
The experience of watching a movie has parallels to playing games, in that how you interpret the content is largely based on your background and past experiences. Factors like sex and gender, economic status, race and cultural upbringing (e.g. the difference between an Asian and an Asian American), educational attainment, age and parenthood can all affect how you react to games and films. That’s generally why most people leave it to critics and journalists to consume these works first and pronounce them great or crap.
In both games and film, critics are assumed to be better interpreters because of their experience consuming the medium. Collect enough critics and they’ll eventually start giving awards to the best works. Just like in gaming media, there’s scrutiny of these awards too – the Oscars aren’t exactly run by the most diligent bunch.
For us less particular types, there’s always Metacritic, IMDB and Rotten Tomatoes, which show average scores from critics and from users. But those opinions aren’t exempt from the background problem – the populations on these websites tend to differ, leading to differences in scores. Metacritic in particular aggregates review scores from different sites, but that doesn’t mean those sites use similarly scaled rating systems – one site might set its baseline (game works, is fun) at 7/10, while another might start at a different number. Meanwhile, Steam reviews tend to be sketchy at best, even though the system is built entirely on thumbs up/down ratings and shows each reviewer’s playtime.
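To see why those baseline differences matter, here’s a toy sketch (the site names, score ranges and scores are all hypothetical, not Metacritic’s actual method) of how the same “decent but unremarkable” verdict can look like very different raw numbers, and how rescaling each score against its own site’s range brings them back in line:

```python
# Hypothetical per-site score ranges: (lowest score the site gives, highest).
# "SiteA" starts its "game works, is fun" baseline at 7/10; "SiteB" at 4/10.
site_ranges = {
    "SiteA": (7.0, 10.0),
    "SiteB": (4.0, 10.0),
}

def normalize(site, score):
    """Rescale a raw score to 0-1 within that site's own range."""
    lo, hi = site_ranges[site]
    return (score - lo) / (hi - lo)

# Both reviews mean roughly the same thing, yet the raw numbers differ:
raw = {"SiteA": 7.5, "SiteB": 5.0}
normalized = {site: normalize(site, score) for site, score in raw.items()}
print(normalized)  # both come out at about 0.17 on their own site's scale
```

Averaging the raw 7.5 and 5.0 would muddle two near-identical opinions; the rescaled values make the agreement visible. A real aggregator would need far more care than this, but the scale mismatch is the core problem.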
My personal approach to interpreting game reviews is to do a background check on the reviewer – does the site generally rate games highly? Was this review user-submitted or written by staff? Is the writer a fan of the genre? Is the writer any good at that kind of game? (Dean Takahashi and Cuphead come to mind.) What kind of person is the writer? From all this we can get a better idea of how much to trust the review. For instance, someone like Totalbiscuit (rest his soul) would be a good source for RTS games. It’s a flawed approach to a somewhat working system.
If there’s one thing that beats any rating system, it’s good ol’ word of mouth. Reviews and gameplay footage are great for deciding whether a game is for you, but nothing beats asking your resident gamer friend (we all have one – it’s 2018) if something is worth buying. If you happen to be the resident gamer friend, you’ve probably got enough experience under your belt to be as good as any critic. Start local with your second and third opinions – people similar to you likely have similar tastes, so check your local games media for info. (Shameless self plug – Sirus updates regularly!) After all, the best reviews are the most transparent.