In this week’s Navigating AI article, Dr. Maya Wilson, Senior Director of Data Science at Syndio, offers a clear guide to AI adoption in a fast-moving tech landscape. As companies embrace AI, it’s important to consider security, compliance, and bias. She emphasizes aligning AI tools with your mission and values, balancing innovation with responsibility. I hope this piece sparks thoughtful conversations within your organization.
-Raffi
At Syndio, where I am the Senior Director of Data Science, we build technology that helps companies comply with global pay transparency regulations and make intelligent pay decisions at scale. Part of my job is staying on top of developments in AI and enabling our employees to use AI in ways that accelerate their work, improve our operations and drive innovation and productivity across the company.
With the huge leaps in AI capabilities over the last several years, there’s been a proliferation of new tools available to businesses and consumers. Existing software has also increasingly incorporated new AI features to drive higher levels of insights, improve user experiences and leverage unstructured data, like text and images. I tell my colleagues that AI probably won’t replace their jobs, but there's a high likelihood that people who are skilled at using it will. Not all tools are created equal, though, and the right ones for your organization will depend on its mission and values.
To maximize your organization's investment in AI and drive positive outcomes, you must first evaluate AI tools through the lens of the following concerns:
How sensitive is the information you’re entering?
The first thing to consider is data privacy. Some AI tools, such as the consumer version of ChatGPT, learn from interactions with their users, meaning they can incorporate information from your inputs (prompts) into the underlying model, creating a risk that it surfaces later for another user.
This isn’t always a problem: I don’t care if ChatGPT learns what’s in my fridge when I'm asking for dinner recipe suggestions. However, if I’m summarizing notes from a customer call or getting feedback on our future product strategy, it could be problematic if that information “leaked” to another ChatGPT user. To avoid this, you must ensure that sensitive information is restricted to AI tools that do not train on (that is, retain and learn from) your inputs, and that those tools meet your company’s security and compliance requirements. Most paid enterprise versions of AI tools provide this protection, but it’s worth verifying.
Another data issue relates to privacy laws, such as the EU’s General Data Protection Regulation. At Syndio, we help hundreds of customers analyze millions of employee records that contain a lot of sensitive data. As a result, we must be extremely careful to ensure that this information stays protected, which automatically rules out AI tools that train on inputs for any use case that touches customer data. Large language models that touch your customers’ data should be treated as data sub-processors, and therefore come with transparency and accountability requirements. Even email addresses can be considered personally identifiable information (PII) under the GDPR, requiring additional scrutiny and potentially customer notification to avoid exposing your organization to fines or contract breaches. Tools that are intended for use with any sensitive data should be critically evaluated for their security posture and consistency with your commitments to your customers.
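As a concrete illustration of that PII concern, here is a minimal, hypothetical sketch of a pre-processing step that masks email addresses before a prompt ever leaves your environment. This is not Syndio’s process, and a single regular expression is no substitute for a proper data-loss-prevention and compliance review; it simply shows how little code a first line of defense can take.

```python
import re

# Illustrative only: a simple regex pass that masks email addresses (one common
# form of PII under the GDPR) before a prompt is sent to an external AI tool.
# A real deployment would rely on a vetted PII-detection/DLP service, not one regex.
EMAIL_PATTERN = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def redact_emails(prompt: str) -> str:
    """Replace anything that looks like an email address with a placeholder."""
    return EMAIL_PATTERN.sub("[REDACTED_EMAIL]", prompt)

if __name__ == "__main__":
    raw = "Summarize the call with jane.doe@example.com about renewal pricing."
    print(redact_emails(raw))
    # Prints: Summarize the call with [REDACTED_EMAIL] about renewal pricing.
```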
What are the regulations in your customers’ states and countries?
It’s important to look into these regulations and research impending ones before you sign a contract. For example, depending on where your customers are based, they may be covered by regulations such as the General Data Protection Regulation or the California Consumer Privacy Act, which have data-deletion requirements, and localization laws, which restrict data transfers across national borders and may require data to be processed in the same location as the customer.
In Europe, the GDPR includes provisions related to automated decision-making and a right to explanation (though the specific scope and interpretation of this right are still being debated), while the proposed EU Artificial Intelligence Act aims to strengthen the right to explanation for high-risk AI systems. (Because of the EU AI Act, customers sometimes send us hundreds of questions about our use of AI, because they have to evaluate whether it will put them at risk.) Similar rights are being explored and implemented in other countries. For example, Canada’s CPPA, Brazil’s LGPD and India’s DPDP Act are creating a global trend toward stringent privacy and AI regulation.
What is the vendor doing to mitigate bias?
Ethical considerations should always be top of mind. Like many companies, yours might use middleware to bring capabilities into your products rather than building them yourself. If these capabilities include AI, you’ll need to be able to answer critical questions from your customers about those capabilities, too. The EU AI Act in particular creates a high burden for software providers to demonstrate how they have met the requirements for everything in their products, whether it's bought or built in-house. Depending on the risk designation, this can range from ensuring the end user is aware that AI is being used to comprehensive technical documentation on bias mitigation efforts, human oversight, and data quality and representativeness analysis. Therefore, it’s really important to make sure that anyone you work with is transparent about what they’re doing with your customers’ data and how that aligns with your customers’ requirements.
What is more important to you: accuracy or creativity?
Some use cases thrive on creativity and innovation, such as talking through product strategy or writing creative marketing content, while others require strict accuracy, such as responding to enterprise RFPs (requests for proposals) and analyzing contracts. Tools like ChatGPT are great for creative work, but there’s a risk that they make things up, referred to as hallucination. Sometimes this is fine and even useful. (I like it when ChatGPT suggests out-of-the-box ideas, because I can evaluate whether they are worth pursuing and disregard them if they aren’t.) Other tools are much better at being grounded in fact, such as Google’s NotebookLM, which works as a research assistant and only looks at the information you provided, citing its sources for everything it puts forward. This makes it much more useful and reliable for applications that require strict accuracy, but it’s a lot less creative and innovative with its output as a result. The right AI tool will vary by task. If you don’t get the results you’re looking for, it’s worth experimenting with a few other tools, always keeping security and privacy in mind.
How much onboarding is required?
Sometimes there is a perception that AI is going to solve all of your problems. But some of these tools require a lot of onboarding to make them useful for your organization, which translates into a lot of your team’s time. It’s important to look at the speed to value: Does it improve over time? Does it create enough value to justify the investment?
If you still have to curate and prepare all of your source material (such as with RFPs), it will take a while — in our case, several weeks — before you reap the benefits, especially if the team that needs to do the prep work is not already trained in AI. And then, of course, you have to repeat that prep work every time the material is updated.
What is the vendor’s roadmap for innovation?
If you invest the startup time to get a tool working and your team trained, you don’t want that investment upended when the vendor’s competitors surpass its capabilities a year later. It’s important to know that the vendor has a track record of innovation, and that you will keep getting new and innovative features. We ask our vendors for their roadmap for future development. While the current market leader may not always stay the best in class, there is now, and will likely always be, a big gap between the best and the next best in AI capabilities. It’s worth ensuring that your teams are using the most cutting-edge tools, since the space is moving fast, and using sub-par tools may leave you behind, especially if the costs to switch are high.
One of the somewhat dangerous things about these application programming interface (API)-based models is that they can change under the hood without the vendor having to tell you. It’s essential to choose a vendor that is transparent about its AI models and the algorithms used, and to ask them to notify you if the API changes or the model is updated in a way that will impact your use cases — because we’ve found that something will work well on Friday and much worse on Monday. (If you, like us, rely on your models being consistent, it’s worth building automated tests for the output, even if some variance is expected.)
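As one way to catch that kind of drift, here is a minimal sketch of the sort of automated output check described above. The call_model function is a hypothetical placeholder for your own wrapper around the vendor’s API, and because some variance is expected, the checks target properties of the output (valid JSON, required fields, allowed values) rather than exact strings.

```python
import json

def call_model(prompt: str) -> str:
    # Stand-in stub so the sketch runs end-to-end; replace with a real call
    # to whichever model API your product depends on.
    return json.dumps({"summary": "stub response", "risk_level": "low"})

# Golden prompts whose outputs should keep certain properties even as the
# underlying model changes between Friday and Monday.
GOLDEN_CASES = [
    {
        "prompt": "Return a JSON object with keys 'summary' and 'risk_level' for: ...",
        "required_keys": {"summary", "risk_level"},
        "allowed_risk_levels": {"low", "medium", "high"},
    },
]

def run_checks() -> None:
    for case in GOLDEN_CASES:
        raw = call_model(case["prompt"])
        data = json.loads(raw)  # the output must still be valid JSON
        missing = case["required_keys"].difference(data)
        assert not missing, f"Output schema changed; missing keys: {missing}"
        assert data["risk_level"] in case["allowed_risk_levels"], (
            f"Unexpected risk_level: {data['risk_level']!r}"
        )
    print("All output checks passed.")

if __name__ == "__main__":
    run_checks()
```

Running a check like this on a schedule, and after any vendor announcement, turns a silent model update into a failing test instead of a surprise in production.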
How ready is your team for AI?
Another consideration: Will the people whose skills are needed to train and use the AI be the right people to adapt to it? For many applied use cases, you will need a lot of domain expertise loaded into the system, so it’s important to determine whether people have the willingness and energy to learn how to integrate it into their workflow. In some cases, an earlier-stage employee is a better fit to grow with the tool. For example, a senior director of marketing may be more set in their ways than a recent grad producing content. You don’t want to spend a bunch of money and time on tools that the end user won’t build into their workflows.
Getting the answers to these questions requires a lot of research, and incorporating AI tools into your organization carries significant costs, both literal and figurative. So it is essential to evaluate these tools with an eye toward liability, transparency and regulation, and to keep evaluating them as both your organization and the vendor evolve. But this is only the beginning of the AI journey, and the companies that win will be the ones that effectively build these capabilities into their toolkit and culture.
Final note
If you’re looking to increase AI usage in your organization, here are a few things that are working for us at Syndio:
Showcasing the impactful ways people across the organization are using AI. Seeing colleagues in your own function identify specific applied use cases goes a long way toward getting buy-in and communicating value. We do this as part of our all-company meetings, spotlighting a few folks to give a demo each time.
Creating opportunities for tutoring and hands-on training. I hold AI office hours every week, when colleagues drop by to learn more about the tools available to us, get specific suggestions for optimizing their processes with AI and iterate on prompts together to improve the outputs. We’re also starting “AI Days” where we intentionally create space for people to learn the tools and showcase what they learned.
Building prototypes to get people started. We love NotebookLM at Syndio, and it’s been a great way to help our teams understand how AI can create efficiencies for cross-functional collaboration and information organization. I created a few Notebooks for projects that I’m on with other stakeholders and showed them how to use the tool to draft enablement materials like FAQs and onboarding guides, as well as highlighting potential risks and recurring themes. Now that they have experienced the value, those colleagues are creating their own Notebooks for other projects they are on. CustomGPTs are another great way to socialize use cases and empower folks on the team to see new opportunities for AI in their own functions.
Finally, finding the AI allies in your organization. Not all leaders are excited about the opportunities that AI brings, but in my experience, there are usually a few who are interested and optimistic about the impact that AI can have on an organization. If you can find those senior leadership sponsors to help create momentum, it can go a long way toward unblocking new tools and modeling an attitude of excitement and innovation about AI rather than fear.