
In 2025, something subtle but consequential shifted in the digital environment:

Realism became cheap

Not cheap in quality, but cheap in accessibility. Convincing language, polished visuals, fluent brand tone, even human-like interaction can now be generated at scale, at speed, and at low cost.

As a result, one of the most relied-upon shortcuts in digital life quietly collapsed:

๐˜๐˜ง ๐˜ช๐˜ต ๐˜ญ๐˜ฐ๐˜ฐ๐˜ฌ๐˜ด ๐˜ฑ๐˜ณ๐˜ฐ๐˜ง๐˜ฆ๐˜ด๐˜ด๐˜ช๐˜ฐ๐˜ฏ๐˜ข๐˜ญ, ๐˜ช๐˜ตโ€™๐˜ด ๐˜ฑ๐˜ณ๐˜ฐ๐˜ฃ๐˜ข๐˜ฃ๐˜ญ๐˜บ ๐˜ณ๐˜ฆ๐˜ข๐˜ญ.

That assumption no longer holds.

๐—ง๐—ฟ๐˜‚๐˜€๐˜ ๐˜€๐—น๐—ผ๐˜„๐—ฒ๐—ฑ ๐—ฑ๐—ผ๐˜„๐—ป

Digital distrust does not usually announce itself through outrage or mass withdrawal. Instead, it shows up through small, cumulative behaviours: a pause before committing, additional cross-checking, sensitivity to anything that feels slightly "off," and a higher bar for belief even when interest exists.

People still browse. They still engage. But trust is no longer extended by default. Belief forms more slowly and more deliberately.

This pattern is increasingly visible in consumer sentiment data.

The Thales 2025 Digital Trust Index, which surveyed over 14,000 consumers globally, found that no digital sector achieved trust levels above 50% when it came to handling personal data. Only 34% of respondents believed companies would manage their data responsibly. This matters because trust is the substrate of digital interaction: without it, engagement becomes cautious by default.

๐—”๐—œ-๐—ฒ๐—ป๐—ฎ๐—ฏ๐—น๐—ฒ๐—ฑ ๐—ณ๐—ฟ๐—ฎ๐˜‚๐—ฑ ๐—ถ๐˜€ ๐—ฐ๐—ต๐—ฎ๐—ป๐—ด๐—ถ๐—ป๐—ด ๐—ต๐—ผ๐˜„ ๐—ฝ๐—ฒ๐—ผ๐—ฝ๐—น๐—ฒ ๐—ฑ๐—ฒ๐—ฐ๐—ถ๐—ฑ๐—ฒ ๐˜„๐—ต๐—ฎ๐˜ ๐˜๐—ผ ๐˜๐—ฟ๐˜‚๐˜€๐˜

Trust erosion today is being actively accelerated by AI-enabled deception.

The ๐Ÿฎ๐Ÿฌ๐Ÿฎ๐Ÿฑ ๐—๐˜‚๐—บ๐—ถ๐—ผ ๐—ข๐—ป๐—น๐—ถ๐—ป๐—ฒ ๐—œ๐—ฑ๐—ฒ๐—ป๐˜๐—ถ๐˜๐˜† ๐—ฆ๐˜๐˜‚๐—ฑ๐˜†, surveying over 8,000 consumers across the US, UK, Singapore, and Mexico, found that 69% view AI-powered fraud as a greater threat than traditional identity theft. In Singapore, 71% said AI-generated scams are harder to detect than conventional ones.

What this reflects is a deeper shift in how people assess credibility.

Familiar digital cues (fluent language, polished visuals, professional tone) no longer function as reliable signals of legitimacy. As a result, trust is no longer formed instinctively; it has to be rebuilt through verification.

In Southeast Asia, this shift is especially pronounced.

The ๐Ÿฎ๐Ÿฌ๐Ÿฎ๐Ÿฑ ๐—ฆ๐˜๐—ฎ๐˜๐—ฒ ๐—ผ๐—ณ ๐—ฆ๐—ฐ๐—ฎ๐—บ๐˜€ ๐—ถ๐—ป ๐—ฆ๐—ผ๐˜‚๐˜๐—ต๐—ฒ๐—ฎ๐˜€๐˜ ๐—”๐˜€๐—ถ๐—ฎ ๐—ฟ๐—ฒ๐—ฝ๐—ผ๐—ฟ๐˜ (GASA / BioCatch / ScamAdviser) found that 63% of adults experienced scam outreach in the past year. In highly digitalised markets with frequent exposure to fraud, skepticism is a rational, learned response.

๐—ช๐—ต๐—ฒ๐—ป ๐—ฑ๐—ถ๐˜€๐˜๐—ฟ๐˜‚๐˜€๐˜ ๐—ฏ๐—ฒ๐—ฐ๐—ฎ๐—บ๐—ฒ ๐˜๐—ฎ๐—ป๐—ด๐—ถ๐—ฏ๐—น๐—ฒ

A few high-profile incidents made the risk impossible to ignore.

๐—•๐—ผ๐—ผ๐—ธ๐—ถ๐—ป๐—ด.๐—ฐ๐—ผ๐—บ publicly warned of a 500โ€“900% increase in scams over an 18-month period, explicitly linking the surge to generative AI making phishing more convincing and scalable. Travel is a trust-intensive category involving large payments, unfamiliar vendors, time pressure. When trust fractures here, it spills into broader online behaviour.

Then came the HK$200 million deepfake video-call fraud.

In early 2024, Hong Kong police reported a case in which an employee of Arup transferred approximately HK$200 million after participating in a video conference where every "executive" on the call was synthetically generated.

This incident shattered a deep psychological anchor:

"If I saw them and heard them, it must be real."

By 2025, similar techniques appeared in follow-on cases across Asia, including a Singapore-based finance director defrauded of nearly US$500,000 through deepfake impersonation.

The trust problem hiding behind AI

AI accelerated deception, but the deeper issue is how it altered the foundations of digital trust.

For a long time, professional language, clean design, confident tone, and brand-like presentation signalled credibility.

Generative AI collapsed the cost of producing these signals. When anyone can generate something that looks polished and legitimate, appearance stops working as a reliable filter.

At the same time, deception migrated into places people already trusted. Scams no longer live only on suspicious websites or random emails. They now surface inside familiar environments: existing email threads, in-app messages, video calls, and branded platforms.

As a result, the burden of verification has moved to the individual. People now need to discern what is real and what is not. When the judgment fails, the consequences are personal: financial, emotional, and psychological.

Over time, this trains caution into our everyday digital behaviour.

I've felt this firsthand.

Over the past year, I've been approached through LinkedIn and other channels for collaborations and advisory work. Some were genuine. Some were not. What stood out was not the scams themselves, but the reflex they generated.

Even legitimate outreach now passes through an internal filter:

Is this real? Who exactly am I dealing with?

This is what digital distrust looks like in practice, and once that reflex forms, it doesn't easily switch off.

๐—ง๐—ต๐—ฒ ๐—–๐—ซ ๐˜๐—ฒ๐—ป๐˜€๐—ถ๐—ผ๐—ป ๐—ฏ๐˜‚๐˜€๐—ถ๐—ป๐—ฒ๐˜€๐˜€๐—ฒ๐˜€ ๐—ป๐—ผ๐˜„ ๐—ณ๐—ฎ๐—ฐ๐—ฒ

Customer experience teams are expected to make interactions effortless: fast sign-ups, minimal steps, and friendly, human-feeling journeys.

At the same time, security and risk teams are under pressure to slow things down by adding checks, verification, and controls to prevent fraud, impersonation, and abuse.

These goals pull in opposite directions, creating constant tension inside organisations.
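One common way teams try to reconcile this tension is risk-based friction: apply the cheapest control that matches the risk signals present, so that most journeys stay on the frictionless path while risky ones are stepped up. The signals, weights, and tiers below are purely illustrative assumptions, not a real fraud model.

```python
# Hypothetical risk-based step-up: accumulate risk signals, then map the
# total score to the lightest control that addresses it.

RISK_WEIGHTS = {                 # illustrative signals and weights
    "new_device": 2,
    "unusual_location": 2,
    "high_value_action": 3,
    "mismatched_details": 4,
}

def friction_level(signals: set[str]) -> str:
    """Return the control tier for a set of observed risk signals."""
    score = sum(RISK_WEIGHTS.get(s, 0) for s in signals)
    if score <= 1:
        return "none"            # frictionless path: the CX goal
    if score <= 4:
        return "otp"             # lightweight one-time passcode
    return "manual_review"       # full stop: human verification

print(friction_level(set()))                                # "none"
print(friction_level({"new_device"}))                       # "otp"
print(friction_level({"new_device", "high_value_action"}))  # "manual_review"
```

The trade-off the article describes lives in the thresholds: lower them and security wins but every journey gains friction; raise them and CX wins but more fraud slips through the frictionless tier.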

The technology that enabled this environment now creates additional work and cost: verifying legitimacy, designing trust cues, managing ambiguity, and reassuring anxious users.

๐—ช๐—ถ๐—น๐—น ๐—ต๐—ฒ๐—ถ๐—ด๐—ต๐˜๐—ฒ๐—ป๐—ฒ๐—ฑ ๐˜ƒ๐—ฒ๐—ฟ๐—ถ๐—ณ๐—ถ๐—ฐ๐—ฎ๐˜๐—ถ๐—ผ๐—ป ๐—ฎ๐—ป๐—ฑ ๐˜ƒ๐—ถ๐—ด๐—ถ๐—น๐—ฎ๐—ป๐—ฐ๐—ฒ ๐—ณ๐—ถ๐˜… ๐—ฑ๐—ถ๐—ด๐—ถ๐˜๐—ฎ๐—น ๐—ฑ๐—ถ๐˜€๐˜๐—ฟ๐˜‚๐˜€๐˜?

Unlikely.

More checks, stronger authentication, and tighter controls may reduce fraud, but they do not restore confidence.

At best, they slow damage.

At worst, they add friction to already cautious interactions.

Regardless, belief is no longer given easily, even when systems appear secure.

As we move into 2026, this becomes a strategic reality. The cost of recovering eroded trust, operationally and reputationally, will continue to rise.

๐—ง๐—ฟ๐˜‚๐˜€๐˜ ๐—ฐ๐—ฎ๐—ป ๐—ป๐—ผ ๐—น๐—ผ๐—ป๐—ด๐—ฒ๐—ฟ ๐—ฏ๐—ฒ ๐˜๐—ฟ๐—ฒ๐—ฎ๐˜๐—ฒ๐—ฑ ๐—ฎ๐˜€ ๐—ฎ ๐˜€๐˜‚๐—ฟ๐—ณ๐—ฎ๐—ฐ๐—ฒ ๐—ผ๐˜‚๐˜๐—ฐ๐—ผ๐—บ๐—ฒ ๐—ผ๐—ณ ๐—ด๐—ผ๐—ผ๐—ฑ ๐—ฑ๐—ฒ๐˜€๐—ถ๐—ด๐—ป ๐—ผ๐—ฟ ๐—ฝ๐—ผ๐—น๐—ถ๐˜€๐—ต๐—ฒ๐—ฑ ๐—ฒ๐˜…๐—ฒ๐—ฐ๐˜‚๐˜๐—ถ๐—ผ๐—ป.

๐—œ๐˜ ๐—ต๐—ฎ๐˜€ ๐˜๐—ผ ๐—ฏ๐—ฒ ๐—ถ๐—ป๐˜๐—ฒ๐—ป๐˜๐—ถ๐—ผ๐—ป๐—ฎ๐—น๐—น๐˜† ๐—ฏ๐˜‚๐—ถ๐—น๐˜ ๐—ถ๐—ป๐˜๐—ผ ๐—ฏ๐˜‚๐˜€๐—ถ๐—ป๐—ฒ๐˜€๐˜€ ๐—บ๐—ผ๐—ฑ๐—ฒ๐—น๐˜€, ๐˜ƒ๐—ฎ๐—น๐˜‚๐—ฒ ๐—ฐ๐—ต๐—ฎ๐—ถ๐—ป๐˜€, ๐—ผ๐—ฝ๐—ฒ๐—ฟ๐—ฎ๐˜๐—ถ๐—ป๐—ด ๐—ฑ๐—ฒ๐—ฐ๐—ถ๐˜€๐—ถ๐—ผ๐—ป๐˜€, ๐—ฎ๐—ป๐—ฑ ๐—ด๐—ผ๐˜ƒ๐—ฒ๐—ฟ๐—ป๐—ฎ๐—ป๐—ฐ๐—ฒ ๐—ฐ๐—ต๐—ผ๐—ถ๐—ฐ๐—ฒ๐˜€.

(Last published Dec 2025, by Christina Lim)
