In support of Mental Health Awareness Week, Brand Union are promoting mental wellbeing in the workplace, with staff taking 10 minutes out of their day for a virtual mindfulness session. MindBubbles (launching in app stores later this month) is the world's first VR mindfulness experience specifically designed by our cognitive scientist to tackle stress and anxiety.
Our VR app teleports staff from a busy workplace to a choice of relaxing virtual spaces. They inhabit an avatar, which provides a body ownership illusion, and are guided through the session by a soothing voice and floating bubbles. It allows you to stop your mind racing and control your thoughts, and the feedback from the lovely staff at Brand Union has been reassuringly positive:
“The problem with Headspace is that it's often hard to concentrate and I find my mind wandering after 30 seconds. This helps you concentrate so much more easily.”
Consumer research and Behavioural Economics are not always the easiest bedfellows. Through Behavioural Economics we now understand why people are often unaware of what influences their behaviour. So, by definition, as they are not aware of these influences, they are often incapable of accurately reporting them. However, a core assumption of market research is that people do have complete insight into the workings of their minds, and hence that predictions of future behaviour based on responses to direct questions will be accurate.
In qualitative research (focus groups, interviews etc.) good behavioural researchers can read respondents and get a feel for what their behaviour might be. However, it takes a brave researcher to confidently predict that respondents will do something when they have not said they would. And often the opinion of good qualitative researchers is not enough for clients to make big decisions, so they look to quantitative research for ‘hard evidence’ that predicts the way people will behave. In other words, they feel the need to get ticks in boxes (or, these days, clicks on web-based surveys) that categorically show that the majority of a large number of people say they will do one thing or another. However, if Behavioural Economics is right, this approach may not always lead to an accurate prediction.
Put simply, people may believe they are reporting accurately on surveys, but are they unreliable witnesses to their own behaviour?
Certainly, recent opinion poll failings on huge issues such as Brexit, the 2015 general election and Scottish independence all point to this. And if people fail to predict - when asked - how they are going to act on major issues, how reliable are they going to be in reporting on the things that matter less, such as whether a new variant of cereal will be eaten by their children, or whether a change in pack design will make it more appealing? The truth is no doubt somewhere in between: people are able to accurately report some things about their opinions and behaviour but are, in fact, rather poor at reporting others.
But this still puts research in a tricky position, as it is commonly asked to produce the hard evidence that allows marketers to make important business decisions, while acknowledging that Behavioural Economics suggests the responses it gathers may not always be as reliable as clients would wish. Clearly, every way of maximising the accuracy of responses in research needs to be reconsidered.
So what is the problem?
Well, it may sound like a daft question, but it is a fundamental one. What exactly is a brain for? This is something I often ask people, and the answer that usually comes back is to think, or reason, or sometimes to feel. Modern thinking from the psychological and neuroscience disciplines is somewhat different.
One undeniable truth about the brain is that it is a product of evolution. As such, its core function, by definition, was to give us an evolutionary advantage: to make us better at whatever made us the top mammal or, more accurately, the best hunter-gatherer (in evolutionary terms, not enough time has passed for us to evolve extensively beyond that). Setting aside some of the brain's core biological / autonomic functions (such as breathing, hunger etc.), the received wisdom is that the brain's primary job is to predict what happens next.
What this means is that in any given situation the brain absorbs a huge amount of information and, based on that, automatically produces feelings and emotions that tell us what might occur in the next few seconds. Say, for example, one of our hominid ancestors on the plains of Africa heard a rustle in the bushes. From previous experience and acquired world knowledge, it sounded like something large, and a potential threat.
A quick, automatic response, one that does not involve thinking or consideration, is by far the most likely to keep our ancestor alive. An instant feeling of ‘fear’ and the appropriate response of ‘avoid’ would lead to a much greater chance of survival. If our ancestors had instead been designed to ‘think’ and ‘consider’, bringing previous information to mind and consciously weighing it, any response would have been slow and ponderous, and they would have been far more likely to end up as lunch than as a parent.
The other factor, of course, is that evolution only allows for efficiency: maximum output for minimum (calorific) input. Although the brain weighs on average 3 pounds (about 2% of total body mass), it uses 20% of the energy we consume. If we think or concentrate for a long time we feel tired, with good reason; thinking uses a surprising amount of energy.
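The disproportion is easy to check with a back-of-envelope calculation, using only the figures quoted above:

```python
# The brain is ~2% of body mass but uses ~20% of the energy we consume.
brain_mass_share = 0.02
brain_energy_share = 0.20

# Per unit of mass, the brain burns roughly this many times the body average:
relative_cost = round(brain_energy_share / brain_mass_share)
print(relative_cost)  # 10
```

Gram for gram, the brain is an order of magnitude more expensive to run than the rest of the body, which is why evolution favours fast, cheap, automatic responses over slow, effortful deliberation.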
This is why the general belief amongst psychologists and neuroscientists is that the thing that gave us the biggest evolutionary advantage was for our brains to evolve the ability to quickly, and accurately (enough), predict immediate responses to our environment. The implication of this ‘what happens next’ function is that the brain is functioning best (i.e. doing what it is primarily designed to do) when it is immersed in a situation and automatically generating feelings about it.
The implication for research, with its tradition of direct questioning, is of course that the approach does not get the brain to do its core function. Direct questions prompt the brain to consciously reconstruct something, either a past event or a future situation: to remember something that happened or to imagine a new scenario. Although the brain can do this, and it can be accurate, the quality of this ‘emulated’ experience is dramatically reduced.
Put simply, even though the brain is the most complex thing we will ever experience, it would need to be many times bigger, and vastly more complicated, to actually recreate the totality of any experience. As such, the conscious mental reconstructions we end up with are summaries of real experiences, and because of this they can, by definition, never be completely accurate. So, in research terms, asking people questions and getting them to consciously reconstruct future or past events will only elicit a response derived from a reduced level of information. This is one of the reasons that what people say is not always what they do, and why predicting behaviour from direct questioning cannot always be entirely accurate.
So how can we maximise accuracy of responses?
Of course, the only reliable way to understand behaviour is to observe it. Through ethnography we can see what people really do and how they behave, without the danger of self-report clouding the answer. But such approaches are expensive and time-intensive, and there are always questions over whether observing is actually changing the behaviour. There is no doubt that as life-logging technology improves it will give a fuller picture of behaviour, and it is likely that this technology will become a staple approach in the research industry. However, we can only observe what currently exists. Research is often asked to provide commentary on scenarios that don't yet exist: new adverts, point of sale, pack designs, and product ideas and extensions that cannot be observed in the real world. We believe answers to these kinds of research challenges can be found in the virtual world.
Catching the thing you didn’t drop
We had a client in our supermarket. He picked up a box of cereal and it slipped. His whole body instantly reacted as his left hand shot out to grab the cereal and stop it falling, accompanied by a small ‘Agh!!’. Nothing unusual here, except he had not actually dropped anything…
The supermarket in question was virtual, and he was using a headset and controllers to interact with it. He'd picked up a box of Coco Pops, his finger slipped on the controller, and the virtual world showed the box falling from the shelf to the floor.
Even though none of it existed, his instant autonomic response was to react as if it had really happened. What was more interesting was that he muttered a little and then picked it up with the controller and put it back on the shelf. He did not even realise that his left hand had tried to catch something that wasn’t there. It was only afterwards when we showed him the video of this that he saw the humour, as well as how his brain had been tricked into believing something had happened that never did.
Putting someone in a virtual world makes the most of the brain performing its primary function: being immersed in a situation and responding automatically in a way that predicts what happens next. In the case above, if something drops, you try to catch it. It's what the brain unthinkingly does. Brains happily respond to the virtual environment just as if it were real.
As an aside, I will admit that I have been demonstrating virtual supermarkets for quite a while and I still find myself walking around the basket that we included on the floor for people to pick up and put things in. Even though I know it's not there, my brain still automatically tells me to avoid it in case I kick it or trip over it. It is all of these ‘brain stem’ based reactions, getting the brain to do what it was designed to do, that are the key to increasing the accuracy of responses in research.
The new virtual world that awaits
The debate Behavioural Economics has started is about the extent to which we are aware of the things that influence our behaviour. This has prompted other observations, specifically in research: if we are not aware of those things, we are by definition not going to be able to report them in a survey or to a researcher. Not because we don't want to, but simply because we don't know, and because doing so involves our brains doing something that is not their primary function, namely consciously emulating future and past events. Virtual reality provides a solution to this.
The most common questions we get asked are “isn't it unnatural?” and “don't people feel odd in it?” My response is that the current convention of asking direct questions, or getting people to talk about something they would not normally talk about in a group of people they have never met, is far less natural. Virtual Reality actually allows us to do research where the brain is functioning as evolution designed it to: experiencing an environment and responding to it as if it were real. It is only through convention that we are used to surveys and focus groups as providing answers.
Direct questioning, in qual or quant, has been the optimal tool in the researcher's toolbox up to now, but new VR and AR technology allows us to explore a new world of research that is likely to be far more predictive than current approaches. The potential for VR-based research is vast: from the creation of fully virtual environments through to clever combinations of 360 video and CGI that allow us to put new shop fixtures, adverts and packs into stores that have been filmed. Although this is predominantly still a qualitative tool, we have collaborated with a research panel provider to produce the UK's first VR panel, opening up the prospect of VR on a quant scale.
With companies like Google, Microsoft, Apple, Facebook, Samsung and HTC betting on this being the next paradigm shift, the future of commercial behavioural research could well see some astonishing changes in the next three years. As penetration of the technology expands, virtual quant research can be conducted to a depth that has not been possible before. Respondents can, in their own home, take part in car clinics run on a huge quant scale, able to sit in, experience and change aspects of the design so brands can get real-time feedback. Respondents in their living room will be able to walk around new store layouts and shop designs.
Already respondents can see new products on shelves, or new pack designs, in the context they will see them in store, to test whether they are appealing. All of this will enable a whole level of behavioural research that at the moment can only be imagined.
Exciting times are ahead.
‘Magical’ examples of VR and AR, particularly in entertainment and advertising, are propelling the industry towards meaningful and measurable use cases, most notably in education, healthcare and consumer research. The latter has been a bit of a surprise to us; however, our cognitive scientist has applied the psychology of thought, learning and mental processing to VR and AR to unlock new insights and behaviours, and we're seeing a transformation in the research sector.
We know, from Behavioural Economics, that people don't make judgements in isolation: a number of things in our environment that we are not always aware of inform decision-making. Decisions are influenced by the context in which they occur, so asking someone about shopping habits in a focus group isn't always optimal. We also know, from implicit testing, that people don't always express how they truly feel about a brand or product when their rational, conscious brain is engaged. Sentiment is influenced by unconscious, ‘in the moment’ thought, so asking questions after the event gives only half the picture.
VR and AR are enabling brands to place people in the right context, with the right stimulus, to unlock insights and behaviours which were previously unavailable with traditional research tools and techniques. Take shopping, for example: very few brands and retailers have a physical store where they can test shopper behaviour in a realistic environment. Our partnerships with brands, agencies and research companies are therefore adding meaningful and measurable value to the marketing process.
Our scientist, Dr Ali Goode, is publishing a more in-depth piece (“Virtual Reality: the Research Tool for Behavioural Economists”) later this week; in the meantime, here are three examples to whet your appetite…
Placing respondents in a VR room-scale CGI environment (supermarket, pharmacy, DIY store etc) allows researchers to observe real consumer behaviours and decisions. Respondents walk down an aisle with their virtual basket and pick up 3D models of products, whilst our proprietary mixed reality technology shows real-time video of their actions. It’s a step change in the quality of research stimulus, from 2D to 3D, and allows us to test brand positioning, packaging design, product concepts and point of sale.
Supplying a panel with a bespoke AR app enables them to place 3D models of products in physical, contextual environments (e.g. shampoo bottle on bathroom shelf, full size car on driveway) to capture their preferences. Participants view ‘virtual’ products in a real environment and investigate interactive design features such as logo, shape, colour etc. It allows qualitative research to be scaled to a quantitative sample and delivers increased stimulus, deeper insight and shorter feedback times on brand positioning, product concepts and packaging design.
Immersing participants in 360 videos of real environments (high street, airport, hospital etc.) delivers contextual and scalable insight into consumer preferences. Participants are placed in 360 video scenarios with decision trees (i.e. a UI of different options), and assets can also be changed using CGI (e.g. ad copy on a poster site) to test responses. It provides access to behavioural insight which wasn't possible before and allows us to research ethnography, signage, point of sale, ad copy, media attention and scenarios such as doctor-patient consultations.
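The branching scenarios described above can be thought of as a simple tree of video clips, where each viewer choice selects the next clip. The following sketch is purely illustrative (the node and clip names are hypothetical, not our production system), but it shows the underlying data structure:

```python
# Illustrative sketch of a branching 360-video scenario: each node holds a
# clip, and the viewer's choice in the overlaid UI selects the next node.
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    clip: str                                      # 360 video clip to play
    options: dict = field(default_factory=dict)    # UI label -> next SceneNode

def play_path(start, choices):
    """Walk the tree following a list of viewer choices; return clips shown."""
    node, shown = start, [start.clip]
    for choice in choices:
        node = node.options[choice]
        shown.append(node.clip)
    return shown

# A tiny hypothetical high-street scenario with one decision point per node.
checkout = SceneNode("checkout_360.mp4")
browse = SceneNode("aisle_360.mp4", {"Go to checkout": checkout})
entrance = SceneNode("entrance_360.mp4", {"Browse aisle": browse})

print(play_path(entrance, ["Browse aisle", "Go to checkout"]))
```

Swapping a CGI asset (say, the ad copy on a poster) then amounts to serving a different clip at the same node, so responses to the variants can be compared at scale.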
Who'd have thought consumer research was cool ;)
Please get in touch if you'd like to find out more:
Let’s consider VR as a useful tool, and perhaps even a productive enhancement to human interaction, bringing together people from around the world to engage and interact — regardless of social, economic or geographic disparities. In the abstract as well as the applied, modern education is poised to take advantage of this latest tech innovation.
Over the last few years, VR has moved from being the purview of the military and aviation to the mainstream of professional development, as managers, instructors, coaches and therapists have claimed increasing benefit from immersive experiences. While statistics on VR use in schools and colleges have yet to be gathered, the steady growth of the market is reflected in the surge of companies solely dedicated to providing schools with packaged educational curriculum and content, teacher training and technological tools to support VR-based instruction in the classroom.
Perhaps the most utopian application of this technology will be seen in terms of bridging cultures and fostering understanding among young students. Much of this early foray into VR-based learning has centered on the hard sciences — biology, anatomy, geology and astronomy — as the curricular focus and learning opportunities are notably enriched through interaction with dimensional objects, animals and environments. The World of Comenius project, a biology lesson at a school in the Czech Republic that employed a Leap Motion controller and specially adapted Oculus Rift DK2 headsets, stands as an exemplary model of innovative scientific learning.
In other areas of education, many classes have used VR tools to collaboratively construct architectural models, recreations of historic or natural sites and other spatial renderings. Instructors also have used VR technology to engage students in topics related to literature, history and economics by offering a deeply immersive sense of place and time, whether historic or evolving.
In what may turn out to be an immersive education game changer, Google launched its Pioneer Expeditions in September 2015. Under this program, thousands of schools around the world are getting — for one day — a kit containing everything a teacher needs to take their class on a virtual trip: Asus smartphones, a tablet for the teacher to direct the tour, a router that allows Expeditions to run without an Internet connection, a library of 100+ virtual trips (from the Great Wall of China to Mars) and Google Cardboard viewers or Mattel ViewMasters that turn smartphones into VR headsets.
This global distribution of VR content and access will undoubtedly influence a pedagogical shift as these new technologies allow a literature teacher in Chicago to “take” her students to Verona to look at the setting for Shakespeare’s Romeo and Juliet, or a teacher in the Bronx to “bring” her Ancient Civilizations class to the ancient Mayan ruins at Chichen Itza.
Access to some type of mobile VR device is affordable for many more individual users and, in turn, many more schools. Despite the fact that VR is still developing, real progress has been seen in the economic scaling of the technology. The cost to the consumer of VR hardware (headsets, in particular) has steadily declined, as noted in the head-mounted displays (HMDs) commercially available today: Google Cardboard for $20 and Samsung Gear VR for $99 (at this writing, Oculus Rift, a desktop VR device, is available for pre-order for $599).
Teachers and students alike are seeking an ever-expanding immersive landscape, where students engage with teachers and each other in transformative experiences through a wide spectrum of interactive resources. In this educational reality, VR has a definitive place of value.
Credit to Elizabeth Reede at TechCrunch.
2016 became the year of mental health awareness: mental illness is no longer the secret illness, and people are now openly talking about it. Prince Harry, notably, came forward to talk about his own experiences following the death of his mother, and he recently hosted an event for Heads Together, a mental health charity, where he advocated talking openly about the things which bother us.
VR has already been used as part of exposure therapy, replicating real-life scenarios so that users can face their fears within a simulated environment. For example, someone with a crippling fear of heights could enter an experience on an aeroplane, watching from the window as the ground moves away from them. If their fear becomes too much, all they have to do is remove the headset, take the time they need to recover and try again.
By replicating certain scenarios, users can be placed within environments that might otherwise make them uncomfortable, like someone with autism having to use public transport. Being present for long enough to realise that their concern was unnecessary (because nothing came of the situation they dreaded) reassures them of their safety and reminds them that they have the power to cope.
VR can benefit patients with post-traumatic stress disorder (PTSD). Adapting VR software to offer eye movement desensitisation and reprocessing (EMDR), an existing treatment for PTSD, could help improve the current reprocessing system. The standard process asks a patient to describe something traumatic or disturbing that has happened to them while simultaneously following a moving object with their eyes. Because this requires multitasking, the concentration involved in directing sight softens the focus on the difficult recollection, potentially making it less vivid and therefore less traumatic.
We think VR can be a force for good in treating one of the top five chronic diseases of the modern age. Our cognitive scientist has already started to write our first program so watch this space for announcements...
We've been experimenting with an HTC Vive mixed reality rig over the last few months, where a green screen allows a broader audience to be involved in the user's experience. It makes VR more social and shareable, and there are a number of business applications. That said, we couldn't help ourselves and hacked together a quick Thump Trump game for a Digital Catapult event.
Google released more details about Daydream last night and it promises to make Virtual Reality a mass market channel when it launches next month.
VR baked into Android OS
Daydream is deeply integrated into the Android Nougat OS, with Google's Pixel phones VR-ready out of the box. As with everything Android, Daydream is expected to scale across multiple manufacturers, with Samsung, Asus, HTC, Huawei and ZTE confirmed to produce Daydream-ready phones. To be optimised for VR, smartphones need a high-quality SoC (system on chip) to maintain 60 frames per second, low-persistence displays to reduce lag and high-end sensors to reduce latency. This is the next evolution of mobile-powered VR and should deliver user ‘presence’ and make longer experiences inside virtual spaces more enjoyable.
Daydream View Headset with Controller
The headset is a serious upgrade from Cardboard and leap-frogs Samsung Gear VR; we'll have to see how it performs compared to tethered headsets like HTC Vive and Oculus Rift. Google will be opening up their tech to other companies, so it's likely we'll see different versions of the headset next year. We really like the inclusion of a controller (a cross between a Wii remote and the Apple TV touch remote), similar to what already makes HTC Vive so intuitive compared to headset buttons or fledgling gesture controls. It in effect brings touchscreen functionality to virtual worlds.
Play Store App Ecosystem
The entire Play Store is accessible while wearing the headset, which is a cool way to discover new apps. This level of navigation moves VR closer to the mainstream as a seamless experience. The announcement that existing Google apps will be made Daydream-compatible reinforces this approach, with Google Maps, YouTube, Photos and Movies automatically viewable in VR. The gold rush will start in earnest, with Netflix, Hulu, HBO and the New York Times already on board. Unity and Epic support also makes the development of advertising experiences an exciting prospect.
Last but not least...
Google also confirmed that Project Tango will work with Daydream, teasing the future integration of three-dimensional depth and motion which digitally maps your physical environment. At this point, the lines between VR and AR blur into Mixed Reality. The platform shift to visual computing platforms is accelerating at breakneck speed so now is the time for marketers to define their strategy.
Virtual Reality and Augmented Reality have reached the Slope of Enlightenment on the emerging technology curve and are now acknowledged as the next computing platform. The VC community invested $1.2bn in the first quarter of 2016, Digi-Capital predicts the sector will be worth $150bn by 2020, and everyone from Microsoft to Samsung to Facebook is rolling out consumer products. It's like Search in 2000, Social in 2005 and Mobile in 2010 - so how are marketers responding to the platform shift?
PepsiCo, Disney, Ford, Nestlé, Coty, Carlsberg, Audi, Specsavers, eBay, Lego, IKEA and a host of others have demonstrated the various use cases for VR, AR and Mixed Reality. It's fair to say we're still in a test-and-learn phase; however, brands like L'Oreal and Sky now have strategic programs in place. We therefore provide practical steps for marketers to develop long-term programs to succeed in an increasingly complex market.
1. Start with a clear challenge
Solutions should solve a tangible business challenge, so an open conversation about “What's your challenge?” is more progressive than “What can you do?” The industry is an evolving toolbox of software, hardware and content options, so successful programs are often bespoke to a specific challenge. Productised solutions, like face-tracking specialist Modiface, have a clear role in the cosmetics category; however, they are one option in a spectrum of possibilities. Start with a strategic goal and an agnostic, open mind about the options. Don't put the proverbial cart before the horse.
2. Identify new consumer triggers
Consumer journey planning, with existing customer segmentation and user experience design, can usually identify moments in the journey where there’s friction or leakage. The option to bridge physical environments with digital experiences can expedite purchases and provide new customer value. Amazon, and more recently Pinterest, help customers find online products by pointing their mobile camera at something in the real world and hyperlinking the results to ratings, reviews and a shopping basket. Identify tangible customer benefits as an integrated part of existing marcoms. Don’t place cool technology before customer value.
3. Set tangible business metrics
Measurable KPIs which demonstrate business value are surprisingly sparse in this sector; however, the sector is maturing and therefore merits assessment alongside other marketing channels. The opportunity, for example, to drive more interactions from FMCG packaging than brands receive from other digital media channels can transform a fizzy drink purveyor into a major media owner. Approach VR and AR as a scalable solution rather than just a campaign implementation. Don't leave the research team and data analysts out of the conversation.
4. Build programs not campaigns
Pilots are an essential part of evaluating new channels in the marcoms mix; however, an ‘AR campaign’ should be considered ‘phase one’ in a broader strategy. The platform shift to natural visual interfaces creates a post-screen world where the two-dimensional confines of traditional computing become obsolete. Understanding how to engage consumers, to tell a story in a new medium, will be more important than a Mobile First strategy. Don't hire a Mobile manager, hire a Computer Vision manager.
5. Assess software performance carefully
The success of a program is partly down to the robustness of the software so it’s essential to identify, select and develop the optimum solution to deliver smooth customer experiences. The reality is, most companies offer niche products within a much broader industry and the performance between suppliers varies considerably. For instance, the speed and accuracy of scanning product packaging varies from 0.3 seconds to over 6 seconds. Importantly, brands can seize the opportunity to develop their own IP and retain data ownership as an integrated IT solution. Don’t fall into the trap of the Emperor’s New Clothes.
6. Develop hardware agnostic solutions
There are a number of VR headsets on the market, ranging from the $5 smartphone-in-a-headset Google Cardboard to the $600 tethered-to-a-computer Oculus Rift, as well as a handful of headsets which deliver see-through experiences - the next generation of Google Glass - from the $950 Meta 2 to the amazing $3,000 HoloLens, which uses gesture and voice commands. We're in the early stages of consumer adoption, so smartphones represent the immediate opportunity to deliver VR and AR experiences. Don't tether your strategy to one platform.
7. Create contextually relevant content
Creativity is more important than ever, with exciting new ways to engage audiences and a need to rethink standard methods of storytelling. Instagram, Snapchat, Tumblr, Musical.ly, Pinterest and the renaissance of gifs and emojis represent a behavioural shift in how people visually communicate. AR and VR experiences are rich territory for brands but require new skillsets: the domains of 360 video, CGI and 3D modelling need to be wrapped in strong user experience design. Don't leave creativity in a technological void.
85% of our combined senses are commanded by sight - with a third of our brain dedicated to visual processing - so technology is now catching up with the human operating system. We're moving to a screenless world where the human eye is enhanced by smart lenses. Apple, the kingmaker of the smartphone, has quietly been acquiring computer vision start-ups, and a Sir Jony Ive product can't be too far away. Brands that treat VR and AR as a strategic channel now will have a competitive advantage when that shift happens.
Microsoft have added Outlook to HoloLens, so users can splash their inbox and calendar events onto surfaces whilst they work on other things. There's no getting away from email in the future; however, with a suite of Windows productivity apps, the platform shift to natural visual interfaces moves from entertainment to meaningful utility. Industry investment in VR/AR businesses exceeds $1.1bn so far this year (compared to $700m for the whole of 2015), so ‘screenless’ Mixed Reality computing should edge closer to the mainstream as headsets come down in price. We'll look back whimsically at those four-screen and dual-screen marketing campaigns.
The platform shift from desktop to mobile to computer vision is more natural for consumers than it is for advertisers. HoloLens, Magic Leap and, in this case, Meta are using neuroscience to build intuitive 'smart glasses' which replace laptops and mobiles. Whilst AR and Mixed Reality are a few years away from mass adoption, the advertisers and agencies already creating custom Visual Marketing experiences are better placed to understand how to automate engagement with consumers in the future. We're perhaps two years away from programmatic AR, but the inventory is unlikely to be remnant banners and buttons (!!), so advertisers will need to deliver engagement rather than impressions to merit a place on the real estate of a person's sight. If a consumer looks at an object in the real world through AR glasses, the advertising context needs to truly enhance rather than interrupt their view.
Gorilla In The Room
Industry views and news