Securely leverage AI for your business
Cut through the hype and learn the simple rules to securely use AI while protecting your brand's most valuable asset: trust
Tracy Pratt
Oct. 19, 2025 4:30 am
Dear Favorite Business Leader,
Ever feel like you can't escape the AI hype? It's everywhere, promising to supercharge productivity, unlock new insights and streamline operations. But for every leader excited about the potential, there's another wary of the risk. They see the headlines, hear the warnings, and ask: "If I put my customer data or private business information into an AI tool, will it expose my info to the world?"
That's a valid concern, and it gets to the heart of what's most valuable to any company — trust. The trust of your clients, the confidentiality of your trade secrets, and the integrity of your brand are non-negotiable. The good news is you don't have to choose between innovation and security. With the right strategy, you can leverage AI's power while building a fortified wall around your sensitive information.
Public Wi-Fi vs. private network
Think of AI tools like different types of internet connections. You wouldn't use a public coffee shop's Wi-Fi for a confidential board meeting, just as you shouldn't treat every AI tool the same way. The first, and most important, step is to understand the difference between public, consumer-grade AI and private, enterprise-level tools.
Public tools are the free chatbots your team might use for brainstorming or drafting a memo. The input you give them might be used to improve the model, and while these tools often anonymize it, the risk of a data leak is real. For example, in the 2018 Cambridge Analytica scandal, it was revealed that millions of Facebook users’ data was improperly collected through a third-party app. While not a direct AI leak, it's a powerful and relatable example of how seemingly harmless interactions on public platforms can expose sensitive information.
Establish a clear policy
For any business, a clear policy for responsible AI use is the simplest and most important solution. This isn't about blocking the technology; it's about providing clear guidelines. Your policy should encourage smart, secure AI use. Here are some fundamental rules:
- Opt out of data sharing. Most major AI providers offer a privacy setting that prevents your new chats from being used for training. Make this a mandatory part of how your team sets up and uses these tools.
- Never input sensitive information. The simplest way to protect your data is to not share it in the first place. Your team should be trained to avoid entering any documents, emails, or reports that contain customer names, financial figures, legal information, or any other proprietary data.
- Keep it high-level. Don't share specific company plans. Instead of asking the AI to "rewrite the project plan for the Q4 launch," ask for "a template for a project plan that includes a timeline, milestones, and resource allocation." This gets you a useful framework without giving away your business’ specific strategy.
- Use AI for public information only. These free tools are great for tasks that use general knowledge, like summarizing public news articles, generating ideas for marketing copy (e.g., suggesting five different headlines for a blog post or creating a quick list of keywords to include in a social media post), or helping to write a non-confidential email. Keep your use cases to information that is already in the public domain.
- Establish a clear data governance policy. A data governance policy could include a rule like "All AI-generated content must be fact-checked and reviewed for brand voice before publication." Mandating that kind of oversight gives employees clarity and helps prevent accidental data leaks.
For deeper integrations
When you're ready to move beyond public tools and tackle more complex projects — the kind that requires feeding a model with your customer interactions, sales figures, or R&D notes — you need to graduate from the consumer space. This is where enterprise-grade AI platforms come in. These are a worthwhile investment for any business that relies on its data to drive efficiency, deliver smarter customer service, or gain a competitive advantage. Companies like Google, Microsoft, and Cohere offer dedicated business solutions where your data is used only for your purposes, within a secure, isolated environment.
And it's not just tech companies getting into the game. Take the example of EY (Ernst & Young), which launched its EY.ai platform in 2023. What's truly remarkable about this is the sheer scale and scope of the commitment. The $1.4 billion investment signals that this isn't a small-scale pilot project, but a core business strategy. Instead of creating a stand-alone AI product, EY is embedding AI directly into its existing global technology platform, EY Fabric, which is used by more than 60,000 clients. This is a crucial data security move. It means client information stays within EY’s secure, trusted ecosystem and is not shared with public models. Their focus on upskilling their workforce with AI knowledge also underscores that even at this massive scale, the future of AI is human-led and secure.
For larger organizations, the options expand to include advanced technical safeguards. One such technique is “federated learning.” While the name might sound like you’re collaborating to train a model, its primary use for a single business is to unlock insights from decentralized data. For example, a global bank can use federated learning to train an AI fraud detection model by learning from data across different branches or regions — without any of the raw, sensitive customer data ever leaving its original location. It's a way to unlock the full potential of your company's data while keeping it in a secure, isolated environment.
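For the technically curious, the core idea behind federated learning can be shown in a few lines. The sketch below is a simplified, hypothetical illustration (not any vendor's actual implementation): each imaginary bank branch computes a model parameter locally, and only that parameter, never the raw transaction data, is sent to a central coordinator to be averaged.

```python
# Minimal sketch of federated averaging, assuming a toy setup:
# each "branch" fits a one-number model locally, and only the
# fitted parameter (never the raw transactions) leaves the branch.

def local_update(transactions):
    """Each branch computes a local statistic (here, the mean amount).
    The raw transactions stay on the branch's own systems."""
    return sum(transactions) / len(transactions), len(transactions)

def federated_average(branch_results):
    """The coordinator combines only the shared parameters,
    weighting each branch by how much data it holds."""
    total = sum(count for _, count in branch_results)
    return sum(param * count for param, count in branch_results) / total

# Hypothetical per-branch data; in practice this never leaves the branch.
branch_a = [120.0, 80.0, 100.0]
branch_b = [300.0, 500.0]

results = [local_update(branch_a), local_update(branch_b)]
global_mean = federated_average(results)
# global_mean matches the mean over all five transactions (220.0),
# even though no branch ever shared a single transaction record.
```

Real systems average thousands of neural-network weights instead of a single mean, and add protections like encryption and differential privacy, but the principle is the same: the insight travels, the data stays home.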
Tools and resources
I get a little geeky when it comes to this stuff, so if you’d like to learn more, here are some resources:
- Public AI privacy settings: Check the data and privacy policies on services like ChatGPT and Google Gemini.
- Enterprise-grade AI platforms: Explore how major providers handle data privacy for business customers: Microsoft Azure AI, Google Cloud AI, and Cohere Enterprise.
- Privacy-enhancing technologies: For a deeper dive into the concepts of federated learning and differential privacy, here are some beginner-friendly resources:
  - Google's "How Federated Learning Protects Privacy."
  - IBM's "Simplified Guide to Federated Learning."
Ultimately, the choice to leverage AI for your business depends on strategy, but it doesn't have to be a gamble. By understanding the difference between public and private models, developing data governance policies, and empowering your people with the right tools, you can build a culture of responsible AI that not only protects your data but also strengthens the trust that is the foundation of your brand.
Brandfully yours,
Tracy
Tracy Pratt, a Cedar Rapids marketing professional with expertise in communication, consumer behavior and AI strategy, believes in blending data with storytelling to help businesses build stronger relationships. Message her on LinkedIn: linkedin.com/in/1tracypratt.