When sharing with AI, financial savvy still starts with human judgment

Professor Rajiv Kohli, the John N. Dalton Memorial Professor of Business at William & Mary's Raymond A. Mason School of Business, recently weighed in on whether sharing your financial data with artificial intelligence tools is safe.

As artificial intelligence becomes increasingly embedded in everyday life—from managing tasks and tutoring to building budgets and tax projections—many consumers wonder: Is it safe to trust AI with my financial information?

That's the question at the heart of a recent U.S. News & World Report article by journalist Geoff Williams, "Should You Be Sharing Your Financial Information with AI?" The story features expert commentary from cybersecurity and IT professionals across higher education, including Kohli.

Kohli, who researches technology strategy and digital transformation, offered a pragmatic take on the risks and relative advantages of using AI tools for financial planning.

"Sharing financial information with AI is no more risky than using online tax preparation software or a loan application form," Kohli told U.S. News. "In fact, it may be a little less risky because a hacker doesn't know whether you were running a hypothetical scenario."

In a follow-up conversation, Kohli elaborated on why AI platforms, particularly those based on large language models (LLMs), might even offer greater privacy protection than many assume.

"AI and LLMs glean from user data to advise the next user but do not directly share that data," he explained. "For example, AI may learn from a portfolio's performance that a 30% investment in bonds is optimal for someone with average risk tolerance. But it won't pass your personal financial data to the next user. That's a key distinction."

Understanding the real risks

The article notes that privacy largely depends on how consumers use a platform and whether they use a paid or free version. As with many online services, free AI tools may use user input to improve their models, sometimes including human review of interactions. That's why experts, including Kohli, urge users to remain discerning and aware of the terms they agree to.

"Just like any other digital data online, AI can be hacked and data can be misused," he said. "But from a hacker's perspective, it's far more efficient to attack a financial site housing thousands of completed mortgage applications than a general-purpose AI platform where financial data is scattered, hypothetical, or de-identified."

A tool, not a substitute for caution

The key takeaway is that AI is a powerful financial planning assistant, not a vault. While it can generate investment scenarios or analyze budget trends, it should not be treated like a tax accountant or fiduciary. And it certainly shouldn't be fed full bank statements, Social Security numbers, or personally identifying information.

Instead, Kohli recommends thinking strategically about what data is needed for sound, safe output. "If you can get the insight you're looking for without including sensitive identifiers, do that," he said. "It's about finding the balance between utility of advice and the risk of 'is it worth it?'"

On the front lines of tech ethics

Kohli's perspective adds to the growing body of thought leadership from faculty at the Raymond A. Mason School of Business, where emerging technologies are examined through a technical lens and from the vantage points of ethics, cybersecurity, and decision-making under uncertainty.

As AI continues to reshape business and society, Kohli reminds us that digital literacy and critical thinking remain our most valuable assets.

Portions of this story originally appeared in U.S. News & World Report, "Should You Be Sharing Your Financial Information with AI?" by Geoff Williams, published on August 6, 2025. Used with acknowledgment.