How Central Texas officials are using it, and how they want to regulate it

When the Texas legislature returns to Austin in January, one bipartisan priority several lawmakers have pointed to is regulating artificial intelligence. Meanwhile, a number of Texas agencies are already using the technology.

Austin sits at the forefront of much of the conversation: the University of Texas at Austin is one of the leading AI research institutions; the city is a hub for technological development where several organizations are developing or implementing AI; and as the capital of one of the largest economies in the world, how its government agencies, both local and statewide, regulate artificial intelligence is critical.

Artificial intelligence, defined

No two artificial intelligence programs are identical, although virtually all applications of AI involve some combination of code, algorithms, and massive amounts of input data. Though artificial intelligence has surged in the public consciousness over the last year, its research origins stretch back to the mid-twentieth century. Since then, its popularity as a concept has waxed and waned.

“We call them sort of ‘AI winters’,” Peter Stone, an artificial intelligence researcher at UT Austin, told CBS Austin. “There’s sort of been a cyclical nature of this field over the last 75 years, where sometimes there’s a lot of hype, and the general public is paying a lot of attention to it, thinks it’s going to solve all the world’s problems. And then there are times when the people have found that the expectations were overblown: it didn’t live up to the promises, and then sort of all the attention goes away for some time.”

Despite that relatively lengthy research history, artificial intelligence is difficult to define, given the topic’s blend of science, ethics, and philosophy. And depending on how you define “intelligence,” an argument could be made that AI-based technology is far more prevalent than headline-grabbing programs like ChatGPT would suggest.

“AI is everywhere,” Matthew Murrell, a law professor at UT Austin, said. “Now, just in your cell phone alone, we have a map application that can map where you go; social media accounts that tailor content specifically for you that’s driven by AI; GPS tracking; tailored ads. And so just in your pocket, there are four or five different types of AI. Not to mention, self-driving cars on the road and AI used in HR decisions and everywhere else.”

AI in Austin

Across the city, state, and country, government agencies are evaluating exactly how they can and will implement artificial intelligence in their operations.

“The City doesn’t have any AI systems that interact directly with the public now,” Daniel Culotta, the City of Austin’s Chief Innovation Officer, told CBS Austin in a roundtable discussion, “but we have been doing some research and some pilots to figure out how to better operate our systems and our operations so that we can serve the community better.” That includes processes like wildfire detection, emergency communications, and transportation.

“The City is not just finding all the places we could apply AI, we’re looking for the best places that provide the highest value to our residents,” Austin Mayor Kirk Watson wrote in a recent newsletter. “We’re doing internal pilots of the generative AI tools you hear so much about (like ChatGPT) to see where they can make us faster, better, and more efficient. But we’re also exploring specialized use cases that are on the cutting edge of AI applications in cities.”

Watson went on to note, “all of our programs are in the pilot and study phase. None have replaced current systems, and all have human oversight, which means they can inform but do not make decisions.”

Outside of Austin, some law enforcement agencies are beginning to implement AI-based strategies. Police departments in cities like Oklahoma City, Fort Collins, Colorado, and Lafayette, Indiana, are using artificial intelligence to write up police reports, raising questions of legality in the court submission process, according to an Associated Press investigation.

In San Antonio, the Police Department informed the City Council it was beginning to pilot a program involving security cameras monitored round-the-clock to detect crime in real time. Police officers would watch the feeds as often as possible, but, when officers are unavailable, artificial intelligence could monitor the cameras instead.

In a statement, a spokesperson for the Austin Police Department told CBS Austin, “The Austin Police is looking to be involved in larger conversations centered around AI with our City leadership. The Austin Police Department is weighing the pros and cons related to different aspects of the work we do from an investigative standpoint. With technology evolving quickly, it would make sense to explore how AI could benefit us as [a] law enforcement agency to make us more efficient and effective. Any use would need to be evaluated as part of a Citywide approval process.”

In February, the Austin City Council approved a resolution, co-authored by Council Member Vanessa Fuentes, that adopted ethical guidelines and procedures for the city government to use artificial intelligence. It called for innovation and collaboration, data privacy and security, transparency, ease of explanation, validity, and reduction of biases in its uses.

“Having this type of technology that essentially is doing massive surveillance on individuals who are not suspected of a crime, we have to have the proper guardrails in place,” Fuentes said. “We have seen countless examples throughout our history of where massive surveillance has been used to have a disproportionate impact on Black and Brown communities and marginalized communities. So we have to be vigilant on how we’re utilizing this type of technology.”

The Texas Department of Public Safety, the state’s police force, is meanwhile already implementing artificial intelligence. According to documents obtained by CBS Austin through an open records request, DPS entered a partnership with a company called PenLink, which uses artificial intelligence to scrape the Internet for users’ information for use in investigations.

DPS did not respond to CBS Austin’s request for comment, particularly regarding whether its use of AI would include “geofencing,” a process by which investigators can trace a user’s location using data from the Internet.

Regulating artificial intelligence

In the last legislative session, Texas lawmakers laid the groundwork for state regulations and codes for artificial intelligence. House Bill 2060, passed largely on a bipartisan basis, created an Artificial Intelligence Advisory Council, tasked with “study[ing] and monitor[ing] artificial intelligence systems developed, employed, or procured by state agencies.”

Currently on the Council are Texas State Representative Giovanni Capriglione, R-Southlake, who authored the bill; Senator Tan Parker, R-Flower Mound; Amanda Crawford, executive director of the Texas Department of Information Resources and the state’s Chief Information Officer; John Bash, founder of the Austin office of Quinn Emanuel Urquhart & Sullivan, LLP; Mark Stone, Chief Information Officer of the Texas A&M University System; Dean Teffer, vice president of IronNet Cybersecurity; and Angela Wilkins, executive director of Rice University’s Ken Kennedy Institute.

Before the legislature returns to Austin in January 2025, the AI Advisory Council is expected to produce a report that acts as a sort of ethical code, Senator Parker told CBS Austin in an interview. That report will then be used as a framework for legislation in the next session.

“It all starts with this, I think, ethical code of conduct that we will pass, an ethical standard,” Senator Parker said. “And from there, we can talk about all the other specifics, going after bad actors, talking about specific industries where we have concerns, or clusters of activity that concern us. But it all starts with that framework about an ethical code of conduct and, again, addressing protection, if you will, for the vulnerable in our society.”

Throughout the year, the committee has heard testimony from state agencies on how they are already implementing AI. For example, the Texas Department of Transportation is piloting a program that would use artificial intelligence to monitor traffic patterns and roadway incidents, with the goal of statewide implementation.

Other departments, such as the Department of Health and Human Services, stressed balancing AI innovation with the need for privacy. Currently, DHHS uses artificial intelligence only in the form of chatbots, but the agency said in a presentation to lawmakers that it is looking to potentially use AI in the future to reduce manual data entry, among other things.

“We are keenly focused on being deliberate in our strategy, because our priority is to protect and safeguard all the sensitive health data that we have,” Jennifer Buaas, an associate commissioner at DHHS, said in an advisory council hearing.

AI in Austin Roundtable

The following is an excerpt from a roundtable discussion CBS Austin hosted with stakeholders in the artificial intelligence industry. It has been edited lightly for length and clarity.

WALT MACIBORSKI, CBS Austin: Where are we in terms of artificial intelligence? How robust is it, how is it working, where are we with the technology?

PETER VOSS, CEO of Aigo: So over the last few years, there’s been tremendous growth and progress in the field of AI. And of course, particularly in the last two years, we’ve now had ChatGPT and generative AI, and it just does phenomenal things. However, on the trajectory to real AI, to AGI [artificial general intelligence], to true human-level AI, we really, in a way, are on the wrong path. In fact, the Chief Scientist for Meta, for Facebook, Yann LeCun, recently said that large language models [LLMs] are an off-ramp to AGI, a distraction, a dead end. So why is that? Well, these large language models now consume an insane amount of power, and they cannot learn interactively. So people are talking about building models that may cost billions of dollars and take months to create; you use them for a few months, and then you throw them away because they cannot learn interactively. So it’s really on the wrong path at the moment.

WM: I want to ask you about those security safeguards. A viewer who wishes to remain anonymous asks, ‘How do you allow AI to foster positive experiences while instituting safeguards and privacy, when the entire premise of AI is based on machine learning?’

ABIGAIL MAINES, CRO of HiddenLayer: I think that’s a great question, and there are lots of different ways to do that. We take an automated tool approach. You can start with employee or personal training. When we first had to roll out new technologies, we said, ‘Do you know what you’re doing? You’re putting confidential data in this, you’re not allowed to put confidential data in this’. So training is step one. Step two would be, there are definitely guardrails in the foundational LLM. So when you’re thinking about things like ChatGPT and Azure OpenAI, those are all sort of big foundational models that we’re all interacting with, whether we know it or not, and they have guardrails built into them. And then, of course, companies like HiddenLayer exist to provide additional efficacy on top of those guardrails. We are seeing attacks, we’re definitely seeing adversarial activity. Adversaries go after attractive targets, which would include things like banking and the financial sector, and highly regulated organizations like the federal government, both in the United States and globally. So that’s generally where we’re seeing the most prevalent adversarial activity today.

WM: A question from a viewer named James, ‘Are we protecting people’s job security from being replaced by artificial intelligence, and how do we make sure people don’t get left behind as our city continues to grow and integrate this technology?’

DANIEL CULOTTA, Chief Innovation Officer for the City of Austin: It’s a great question. I think improving AI literacy and making sure people have the training they need to use these tools effectively across many different types of jobs and sectors is really key. We do want to make sure at the City organization that we’re considering these as tools that people use, not replacements for people. So, we’re really devoted to training our large workforce, making resources available for the community, making sure that everyone has equitable access to these tools that they can bring forward in their jobs.

MICHAEL ADKISON, CBS Austin: Another question we received from a viewer was specifically about the role that the state government is going to play. That viewer asked, ‘Exactly how is it that the legislature is going to protect me from artificial intelligence?’ What would you say to that?

MIHAELA PLESA, Texas State Representative (D-Plano): Well, I was really honored to have sponsored a couple of bills last session that were passed into law, one specifically dealing with children and deepfakes in pornography. And so we’re trying to create spaces to keep people safe. We have seen with the advancements of artificial intelligence that people are using people’s images without their knowledge, their voice without their knowledge. And so we want to create legislation that is protecting people and letting people know that they can’t do those types of things. There’s also a space in the legislature to create invisible watermarks, so that when you are creating AI-generated videos or imagery, there’s going to be an invisible watermark there to let people know this was created using AI, this was not a human journalist that sat down and wrote this, or a videographer that created this image.