Indiana tech, privacy chiefs talk benefits, concerns related to state AI usage

Indiana’s government is already using generative artificial intelligence (AI) and plans to expand its use, state technology and data leaders told the Artificial Intelligence Task Force on Wednesday.

The state aims to help Hoosiers find information more easily. But, task force members noted, there are privacy, cybersecurity, contract and cost concerns to tackle along the way.

Rep. Matt Lehman, who co-chairs the task force, opened the meeting with remarks partially sourced from ChatGPT, a generative AI chatbot.

“It is wonderful and it is scary,” Lehman said of ChatGPT’s capabilities.

“There’s a lot of conflict out there,” said Lehman, a Republican from Berne. “So, we have to find that balance between good results that make our economies (and) make our jobs easier and better — but, at the same time, not create more fear and doubt.”

What’s in use

Tracy Barnes, Indiana’s chief information officer, said there are three “known” uses of generative AI across state government.

First is the Department of Workforce Development’s recommendation engine, which suggests training, education and other employment resources to Hoosiers submitting unemployment insurance claims. It went into production in November 2023.

Barnes’ own Indiana Office of Technology runs the second and third.

In 2021, it put an internal security log reader into production; the tool looks for anomalies, like unusual log-in locations and times, then warns the agency’s security operations center.

“We’re able to get alerts and notifications to make sure that we’re able to identify, before a situation gets worse, if an account has been breached or credentials have been sold or harvested,” Barnes said.
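The article doesn’t describe how the log reader works internally. As an illustration only, the kind of check it might run — flagging log-ins from unseen locations or at odd hours against a per-account baseline — could be sketched like this (the account names, locations and thresholds here are invented for the example):

```python
from datetime import datetime

# Hypothetical per-account baselines: locations and hours seen before.
# A real system would learn these from historical log data.
BASELINES = {
    "jdoe": {"locations": {"Indianapolis"}, "hours": set(range(7, 19))},
}

def flag_login(account, location, timestamp):
    """Return a list of alert reasons if a log-in deviates from baseline."""
    base = BASELINES.get(account)
    if base is None:
        return ["unknown account"]
    reasons = []
    if location not in base["locations"]:
        reasons.append(f"new location: {location}")
    if timestamp.hour not in base["hours"]:
        reasons.append(f"unusual hour: {timestamp.hour}:00")
    return reasons  # an empty list means nothing anomalous

# A 3 a.m. log-in from an unseen city trips both checks and would
# generate an alert for the security operations center.
alerts = flag_login("jdoe", "Kyiv", datetime(2024, 6, 1, 3, 0))
```

Production systems replace static rules like these with statistical or machine-learned baselines, but the alert-on-deviation pattern is the same.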

In June, his agency launched a chatbot — in beta — for state government websites. It’s ingested all of the state’s public-facing content and can use it to answer questions.

“The hope is to … let it answer those questions and make navigation and finding the resources and available information a lot quicker, instead of folks having to try and navigate and understand some of the intricacies of what agency does what,” Barnes said.
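A chatbot that answers from ingested site content generally works by retrieving the most relevant page first, then answering from it. A minimal sketch of the retrieval step, assuming a toy index of two invented page snippets and simple word overlap (a real deployment would index the full sites and use semantic search):

```python
# Hypothetical snippets of public-facing state web content. These URLs
# and texts are invented for illustration, not the state's actual index.
PAGES = {
    "bmv.in.gov/renew": "Renew your driver's license online or at a branch.",
    "dwd.in.gov/claims": "File an unemployment insurance claim and track its status.",
}

def best_page(question):
    """Return the (url, text) pair sharing the most words with the question."""
    q_words = set(question.lower().split())

    def overlap(item):
        return len(q_words & set(item[1].lower().split()))

    return max(PAGES.items(), key=overlap)

# The question mentions "renew" and "driver's", so the BMV page wins,
# and a generative model would draft an answer from that page's text.
url, text = best_page("How do I renew my driver's license?")
```

The payoff the article describes — users not needing to know which agency does what — comes from this step: the retriever, not the user, maps the question to the right agency’s content.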

Sen. Liz Brown, a Republican from Fort Wayne, raised privacy concerns: what if users offer personal, identifying information in conversations with the chatbot?

Barnes said that while his office can’t prevent “oversharing,” it also can’t track people. There’s access to internet protocol addresses and general locations, but not specifics.

“You’re not logging in. We’re not asking for personal details or authentication or ID and password or anything of that nature,” he said. “So we don’t know.”

The state also has some means to cut down on unsanctioned AI usage.

Indiana Chief Privacy Officer Todd Cotterill, of the Management and Performance Hub, described training for state employees, as well as “do’s and don’t’s” guidance in the works.

The state can also monitor employee activities on state devices, and can control some state-related activities on employees’ personal devices, per Barnes. But, it can’t determine when work has been done using AI or validate human work as authentic, he said.

What’s on the way, and worries

Barnes also outlined five more ways the state may use generative AI in the future.

Those include enterprise-wide user productivity functions, like email summarization and calendar review, as well as AI-generated voice technology to help answer Hoosiers’ questions.

Various agencies are also interested in a range of tools for monitoring video cameras, reviewing documents and translating content.

Those pursuits engendered further concerns.

Brown noted that lawmaker emails and documents aren’t public records under Indiana law. She asked how the state would treat agency interactions with lawmakers in its data storage: “Microsoft Copilot … is not integrating our emails to create a bigger chat system, right?”

Barnes acknowledged her question as a “legitimate concern.”

“Those are some of the test cases that we need to start working on — which is why we’ve not vetted or validated it, (or) made this available for actual consumption at this point,” he added.

Barnes also outlined his own concerns: losing protected or sensitive data, using data that’s biased or inaccurate, and validating the accuracy of work by external vendors.

His agency wants to beef up its contract terms to know more about vendors’ AI models, data sharing practices, and privacy or security practices.

“And then, if there’s a concern, or we start to see issues where they’re actually releasing data, or something is mistakenly getting put out there, how do we hold them accountable?” Barnes told the Capital Chronicle. “How do we seek retribution to make sure the state’s made whole and we’re protecting our brand?”

Cotterill said stronger terms would also protect the state from sudden company changes: “Existing vendors are saying, ‘Oh, we have a new AI thing that we’re rolling out, and now you’re in our cloud.’ … We have to, over time, get a better handle on that.”

Brown put forth cost as a concern.

“We’ve seen companies spend literally millions and billions, and they’re starting to pull back because they haven’t got their (return on investment),” she said.

Cotterill said that, while the state considers returns in initial risk reviews, it’s too early for a comprehensive analysis of returns.

But Barnes said data storage costs could be “exorbitant.”

He told the Capital Chronicle that state data has grown to about three petabytes. Each petabyte is one million times larger than a gigabyte.
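The comparison checks out: in decimal (SI) units a gigabyte is 10⁹ bytes and a petabyte 10¹⁵, so three petabytes is three million gigabytes. The arithmetic, for the skeptical:

```python
GIGABYTE = 10**9   # bytes, decimal (SI) units
PETABYTE = 10**15  # bytes

# One petabyte is a million gigabytes, so the state's ~3 PB of data
# works out to about 3,000,000 GB.
state_data_gb = 3 * PETABYTE // GIGABYTE
```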

The task force is expected to meet again in September, per Lehman, and could meet once more before the legislative session begins in January.

By Leslie Bonilla Muñiz

The Indiana Capital Chronicle is an independent, not-for-profit news organization that covers state government, policy and elections.