Managing the ethical risks of AI, big data and psychological profiling

Companies face new ethical challenges as big data, AI and behavioural science merge to influence consumer behaviour at an unprecedented scale

In the rapidly evolving landscape of digital persuasion, companies face mounting pressure to collect and use consumer data responsibly. Consumer trust continues to erode as revelations about data misuse emerge with concerning regularity. This erosion coincides with a technological revolution that has made psychological profiling and behavioural modification more accessible than ever before.

The convergence of big data, artificial intelligence, and behavioural science has created unprecedented capabilities for organisations to understand and influence human behaviour. Executives now confront difficult questions about where to draw ethical boundaries in this new frontier. Apple, known for its commitment to user privacy, developed an innovative approach to evaluating the ethical implications of its data practices: what it calls the “Evil Steve test”, named after its late co-founder Steve Jobs.

This thought experiment asks project team members to imagine a future where an evil CEO has taken control of Apple with the intention of harming and exploiting users. Team members must consider whether they would still feel comfortable with the data collection methods and product designs they’ve implemented under these hypothetical circumstances. If the answer is no, they return to the drawing board.

This practical exercise forces teams to confront worst-case scenarios and design products with safeguards that extend beyond the tenure of current leadership, according to Sandra Matz-Cerf, a computational social scientist who serves as David W. Zalaznick Associate Professor of Business at Columbia Business School.

Columbia Business School's Sandra Matz-Cerf said that because Large Language Models have ‘read’ the entire internet, they can infer psychological characteristics from pretty much any type of data input. Photo: supplied

“Apple has an interesting strategy for anticipating such negative outcomes and acknowledging the fact that data is permanent – but leadership might not be,” explained A/Prof. Matz-Cerf, who said this approach represents a significant shift from compliance-focused ethics to values-based decision-making. As organisations face increasing scrutiny over their data practices, such proactive ethical frameworks may prove essential.

The current state of psychological profiling in business

How advanced are most organisations when it comes to using big data and related technologies for psychological profiling and behavioural modification? While adoption varies, A/Prof. Matz-Cerf said the potential for sophisticated targeting continues to grow.

“Many companies have a pretty decent sense of how they can use data to not only predict but also change behaviour,” explained A/Prof. Matz-Cerf, who recently authored Mindmasters: The Data-Driven Science of Predicting and Changing Human Behaviour. “The equation is pretty simple: The more you know about your counterpart, the easier it is for you to influence their behaviour. In most cases, we’re talking about consumers, but there’s certainly potential to use similar approaches internally with employees.”

However, she suggested that explicit psychological considerations remain somewhat limited in practice. Most organisations still rely heavily on demographic data and past behaviours rather than deeper psychological insights. This gap stems partly from technological limitations that previously made psychological profiling difficult to implement at scale.

Read more: How organisational goals can undermine responsible AI implementation

The emergence of Large Language Models (LLMs) changed the landscape dramatically. These advanced AI systems have consumed vast amounts of internet content, making them remarkably adept at understanding human psychology.

“Because they have ‘read’ the entire internet, these models are behavioural experts and probably understand more about psychological science than most humans (or even researchers),” A/Prof. Matz-Cerf observed. “This allows them to not only infer psychological characteristics from pretty much any type of data input – whether that’s your purchases, searches, conversations, posts or more – but also generate content that speaks to these characteristics.”
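
To make this concrete, here is a minimal sketch of what such inference could look like, assuming the openai Python package and an API key are available. The model name, prompt wording and five-point rating scale are illustrative assumptions for this article, not a validated psychometric instrument or any researcher’s actual pipeline.

```python
# Illustrative sketch only: asking a chat LLM to rate Big Five personality
# cues from everyday text. Assumes the `openai` package and an
# OPENAI_API_KEY environment variable; the prompt and 1-5 scale are
# invented for this example, not a validated instrument.
from openai import OpenAI

client = OpenAI()

posts = [
    "Can't wait for the weekend – inviting everyone I know to the party!",
    "Spent the evening reorganising my bookshelf by topic and colour.",
]

prompt = (
    "Rate the author of the following posts on the Big Five traits "
    "(openness, conscientiousness, extraversion, agreeableness, "
    "neuroticism), each from 1 (low) to 5 (high). Return JSON only.\n\n"
    + "\n".join(posts)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # e.g. {"extraversion": 4, ...}
```

The same mechanism cuts both ways: a model that can score a post for extraversion can just as easily generate a sales pitch aimed at an extravert, which is precisely the dual-use capability described here.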

Sam Kirshner, Associate Professor in the School of Information Systems and Technology Management at UNSW Business School, pointed to concerning examples of how these capabilities are already being used, referencing claims made in Sarah Wynn-Williams’ memoir, Careless People. “Facebook allowed companies selling beauty and weight-loss products to explicitly target teenage girls immediately after they deleted a selfie – likely due to feelings of insecurity,” said A/Prof. Kirshner, who studies how algorithms and artificial intelligence shape behavioural decision-making in operations and technology management.

“While such data-driven manipulation was once limited to major tech platforms, the rise of Generative AI has made these tools more widely accessible. As a result, more companies will now have the power to influence people – for better or worse.”

UNSW Business School's Sam Kirshner said that when it comes to ethics and data use, competitive pressures can lead managers to find ways of justifying behaviour they might otherwise consider wrong. Photo: UNSW Sydney

Michael Yaziji, Professor of Strategy and Leadership at IMD, placed these developments in historical context: “Psychological manipulation for profit is nothing new,” he said. “Back in the 1950s, Vance Packard exposed this in The Hidden Persuaders, documenting how marketers tapped into our subconscious motivations and emotional triggers to influence what we buy. Every consumer-facing company has been doing this for decades – profiling customers and nudging buying behaviour.”

The difference today lies in the scale and precision of these techniques. “We’re not talking about broad demographic targeting anymore – they’re tracking your scrolling speed, how long you linger on content, and even your momentary emotional states,” explained Prof. Yaziji, an expert in strategy, leadership and sustainability who is currently researching the impact of artificial intelligence. “The goal remains the same as in Packard’s day – influencing behaviour – but the methods have become frighteningly precise, with modern AI processing billions of micro-signals every second.”

Potential benefits of psychological targeting

Despite valid concerns, psychological targeting offers significant potential benefits when applied responsibly. A/Prof. Matz-Cerf framed it as a neutral tool that can be directed toward positive outcomes: “As with every other technology, psychological targeting is a tool. A tool that allows us to tap into the needs and motivations of people, and change the way they interact with the world. That can create incredible opportunities for supporting people to accomplish their goals even when doing so isn’t easy.”

She highlighted a real-world example where her team partnered with US non-profit SaverLife to help low-income individuals improve their savings habits. By targeting users with messages tailored to their personality traits, they achieved meaningful results.

“Among those who received the personality-tailored messaging, 11% managed to save $100. Up from 4% among people who didn’t receive any messaging and 7% among people who were targeted with SaverLife’s best-performing generic campaign,” A/Prof. Matz-Cerf reported. “That’s still far from perfect, of course. But think of it this way: among every 100 people reached by our campaign, we had managed to get an additional five to at least double their savings and build a critical emergency cushion.”

Read more: Building socially intelligent AI: Insights from the trust game

A/Prof. Kirshner pointed to the growing role of AI as a personalised assistant. He described how freely available large language models provided him with customised travel recommendations during a recent trip to Scotland: “By telling Gemini that I’m a vegetarian who enjoys nature, visiting distilleries, nice hotels, and avoiding crowded tourist areas, I was able to get a personalised and highly relevant itinerary with activities, restaurants, and hotels, which I largely followed,” he explained.

He envisioned a future where digital twins will understand not just our preferences but our deeper psychological motivations, allowing them to make increasingly complex decisions on our behalf. Even seemingly simple tasks like grocery shopping involve nuanced preferences that these systems will need to grasp.

“Take the task of buying yoghurt. My AI knows I prefer mango coconut yoghurt. But what happens when the store’s AI offers vanilla Greek yoghurt at a steep discount?” A/Prof. Kirshner asked. “Without knowing how price-sensitive I am, or how much I like Greek yoghurt, how would my AI know which choice I’d prefer?”
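
A toy calculation shows why this is hard. In the hypothetical sketch below, a digital twin scores each offer as taste benefit minus a price penalty; every number is invented for illustration, and the “right” purchase flips depending on a price-sensitivity parameter the agent would somehow have to learn.

```python
# Hypothetical sketch: a digital twin choosing between two yoghurt offers.
# All prices, 'liking' scores and sensitivity values are invented for
# illustration; a real agent would have to learn them from user behaviour.
from dataclasses import dataclass

@dataclass
class Offer:
    product: str
    price: float   # dollars
    liking: float  # 0-1: how much the user enjoys this product

def utility(offer: Offer, price_sensitivity: float) -> float:
    """Higher is better: taste benefit minus a price penalty."""
    return offer.liking - price_sensitivity * offer.price

mango = Offer("mango coconut yoghurt", price=6.00, liking=0.90)
greek = Offer("vanilla Greek yoghurt (discounted)", price=3.50, liking=0.60)

# The recommended purchase flips with the price-sensitivity estimate:
for sensitivity in (0.05, 0.15):
    best = max((mango, greek), key=lambda o: utility(o, sensitivity))
    print(f"price_sensitivity={sensitivity}: buy {best.product}")
# 0.05 -> mango coconut yoghurt; 0.15 -> vanilla Greek yoghurt (discounted)
```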

The dark side of digital persuasion

Despite these potential benefits, there are significant concerns about the risks associated with psychological profiling and behavioural modification technologies. Privacy loss is a primary concern, and A/Prof. Matz-Cerf challenged the common refrain that privacy concerns only matter for those with “something to hide”.

“Here’s the problem with this kind of mindset,” she asserted. “First of all, not having to worry about your personal data getting into the wrong hands is a privilege that not all people get to enjoy (think of gay individuals in parts of the world where homosexuality is still criminalised). But more importantly, it’s a privilege that might not be granted to you tomorrow. Because your data is permanent, but the leadership of companies or governments with access to the data isn’t.”

Read more: Protecting your privacy: should AI write hospital discharge papers?

Privacy fundamentally concerns personal autonomy, and A/Prof. Matz-Cerf explained that giving up privacy means giving up control over your life and the choices you make. “Once a third party understands who you are, they can leverage the insights to influence the decisions you make – whether those are as small as the toothpaste you pick or as big as the political candidate you vote for,” she said.

A/Prof. Kirshner illustrated how surveillance itself shapes behaviour with a relatable example: “I ask those who admit to being bad singers whether they ever sing in the shower. Most say yes. Then I ask: if you knew your friends were secretly listening at the bathroom door while you belted out Taylor Swift, would you still sing? Most admit they wouldn’t – they’d shower in silence out of embarrassment.”

This principle extends to many contexts, including driving habits. “The same thing happens in traffic: even if everyone is driving just 5km/h over the speed limit, the moment a parked police car comes into view, drivers instinctively hit the brakes to get under the limit,” he observed. This constant sense of being observed tends to promote conformity rather than creativity or self-expression.

Prof. Yaziji reflected on the broader societal impacts of AI, particularly regarding mental health and social connections. He referenced media critic Neil Postman’s warnings about entertainment-driven media: “Neil Postman’s warning in Amusing Ourselves to Death feels eerily prophetic now. He argued that media shapes not just what we consume but how we think, pushing society toward prioritising entertainment above all.”

IMD's Michael Yaziji said artificial intelligence has made the process of influencing behaviour ‘frighteningly precise’, with modern AI processing billions of micro-signals every second. Photo: IMD

Modern platforms like TikTok and YouTube exemplify this concern, with AI systems “designed with one purpose: keeping you glued to endless streams of bite-sized content, sacrificing depth and critical thinking along the way.” Prof. Yaziji described this as a paradigm shift: “We’re not the consumers anymore – we’re what’s being consumed.”

Organisational approaches and ethical considerations

When asked about organisational approaches to these technologies, the experts offered varying perspectives on corporate intentions and practices.

A/Prof. Matz-Cerf took a relatively optimistic view: “I genuinely believe that most organisations have good intentions when they use data,” she asserted. “Of course, at the end of the day, the application of technologies like psychological targeting is meant to boost profits. But they often do so by creating value for customers. I think it’s rare for a company to come in and ask the question of how they can best exploit and harm their customers.”

However, she acknowledged that good intentions alone prove insufficient without proper safeguards. Without appropriate guardrails in place, even relatively benevolent actors can create harm. “This is partly because safeguarding data from outside attacks is hard, and partly because leaders will always face trade-offs, where using data in a particular way benefits consumers but not the company or vice versa,” she said.

Read more: The strategic impact of AI on business transformation

A/Prof. Kirshner offered a more cautious assessment, highlighting how competitive pressures can lead to ethical compromises: “While I would like to believe that most organisations have good intentions when they use data, competitive pressures can lead managers to morally disengage – that is, to find ways of justifying behaviour they might otherwise consider wrong.”

He described how industry norms can normalise questionable practices. If several competitors are already cutting ethical corners by launching AI tools that manipulate consumer behaviour or compromise privacy, managers may begin to see these actions as normal, he said. The more common the behaviour becomes, the easier it is to shift responsibility onto ‘industry standards’ or to say, ‘everyone else is doing it’.

Comparative justification represents another common rationalisation. “A company might justify using mildly manipulative algorithms by pointing out that a rival uses far more aggressive targeting. These rationalisations allow people to maintain a sense of moral integrity while still engaging in questionable decisions,” he said.

Practical guidance for ethical implementation

For organisations seeking to implement these technologies ethically and responsibly, A/Prof. Matz-Cerf encouraged a fundamental shift in mindset: “The simplest answer is to shift from asking yourself what you can legally get away with to asking yourself what is ethical and aligned with your core values. It sounds so simple, but in my experience, the former approach dominates all too often,” she said.

Emerging technical solutions can address privacy concerns while preserving the benefits of personalisation. Instead of pooling data on centralised servers, she said, machine learning approaches like federated learning train algorithms directly on devices, ensuring that sensitive information never leaves users’ hands. Apple’s Siri and Google’s predictive text on Android devices, for example, leverage the computing power of your smartphone to train their models locally.
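
For intuition about how this keeps raw data on the device, here is a minimal NumPy sketch of federated averaging (FedAvg), the canonical federated learning algorithm: each simulated ‘device’ trains a small linear model locally and shares only its weights, which a server averages. Production frameworks such as TensorFlow Federated add secure aggregation, client sampling and much more; treat this as a schematic, not a deployable implementation.

```python
# Schematic of federated averaging (FedAvg): raw data stays on each
# device; only locally trained model weights are shared and averaged.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=20):
    """Gradient descent on one device's private data (never uploaded)."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w  # only the weights leave the device

# Three simulated devices, each holding private data from the same true model
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

global_w = np.zeros(2)
for _ in range(10):  # each round: local training, then server-side averaging
    global_w = np.mean(
        [local_update(global_w, X, y) for X, y in devices], axis=0
    )

print(global_w)  # converges towards [2.0, -1.0] without pooling any raw data
```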

The adoption of these privacy-preserving technologies continues to accelerate. “Although the transition to techniques like federated learning won’t happen overnight, their adoption is already expanding rapidly,” A/Prof. Matz-Cerf noted. In addition to companies like Google making much of their foundational research accessible through academic papers and open-source frameworks (such as TensorFlow Federated), she said there is a growing industry of consulting companies supporting the integration for small- to medium-sized businesses that might lack access to internal expertise.

Modern platforms like TikTok are designed to keep users glued to endless streams of bite-sized content: “We’re not the consumers anymore – we’re what’s being consumed,” said IMD’s Michael Yaziji. Photo: Adobe Stock

A/Prof. Kirshner emphasised the importance of public education and awareness. To ensure an ethically beneficial approach to AI development and use, he said organisations and their leaders must look beyond internal compliance frameworks and consider the broader ecosystem in which their technologies operate. “One of the most effective long-term strategies is to increase public awareness and digital literacy,” he affirmed.

He highlighted the important role of consumer demand in driving positive change, similar to sustainability trends in other industries. Just as growing consumer demand for sustainability is starting to reshape industries like fashion and food, he said a well-informed public can create new market dynamics around ethical AI. “If people begin to prioritise privacy, fairness, and explainability when choosing digital services, this will open the door for start-ups to build ethically grounded alternatives and put pressure on larger tech firms to reform questionable practices,” he said.

Key takeaways for business professionals

For business leaders navigating the complex landscape of psychological profiling and behavioural modification technologies, six practical insights emerge from these expert perspectives:

First, implement proactive ethical frameworks like Apple’s “Evil Steve test”. By considering worst-case scenarios for data usage, organisations can build safeguards that transcend current leadership and market conditions. This approach acknowledges that data collected today will outlive current management teams and governance structures.

Second, recognise that technological capabilities now make psychological targeting accessible at scale. With the emergence of Large Language Models, organisations can implement sophisticated psychological profiling without specialised expertise. This democratisation creates both opportunities and responsibilities.

Third, focus on value creation rather than exploitation. The most sustainable applications of these technologies help consumers achieve their goals rather than manipulating them against their interests. The SaverLife example demonstrates how personalised interventions can support positive behavioural change.

Fourth, explore privacy-preserving technologies like federated learning. These approaches enable personalisation without centralised data collection, potentially resolving the tension between privacy and customisation. Growing support from major tech companies and consultancies makes these solutions increasingly accessible.

Fifth, watch for signs of moral disengagement within your organisation. When teams justify decisions by pointing to industry norms or competitor behaviour, they may be rationalising ethically questionable practices. Create space for explicit ethical discussions separate from legal compliance considerations.

Finally, prepare for increasing consumer awareness and demand for digital ethics. As public understanding grows, companies with strong ethical practices may gain competitive advantages similar to those enjoyed by sustainable brands in other sectors. Early adoption of responsible approaches may yield long-term market benefits.

The path forward requires balancing technological possibilities with ethical responsibilities. As A/Prof. Sandra Matz-Cerf explained, psychological targeting remains fundamentally a tool – one that can either support or undermine human autonomy and wellbeing. The difference lies not in the technology itself but in how organisations choose to implement it.
