Our CEO, Tushar Banerji, was joined by Mimi Kantor from the Executive Office of Technology Services and Security (EOTSS) to discuss safe and controlled AI chatbots in the public sector.
The Lunch and Learn webinar was hosted by Government Technology; view the summary and recording below.
Craig Orgeron, Ph.D., is a senior fellow for the Center for Digital Government and a professor of MIS at Millsaps College. He moderated this discussion on safe and controlled AI chatbots for the public sector.
Implementing generative AI and LLM technologies out of the box carries inherent risks. For example, training data may contain inaccuracies or biases, and these can surface in generated content, producing misinformation or reinforcing stereotypes.
Another concern is application and data security, along with privacy: chatbots interact with constituents, who may inadvertently input personal data.
Striking the right balance between leveraging the efficiency and innovation of generative AI and safeguarding against data inaccuracies, bias, and privacy risks is an ongoing, multifaceted challenge for government organizations. Solving this problem will lead to safe and controlled AI chatbots in the public sector.
Transparency is a fundamental principle in NeuroSoph’s responsible public sector AI chatbot deployments. We are fully transparent about how our AI chatbot technology works and how data is used, so we can communicate this clearly to our government partners and the public.
One of the key aspects of transparency is establishing accountability for our AI chatbots. We keep humans in the loop to provide oversight for continuous improvements, error checking, accessibility issues, ethical implications, adaptability, and controlling chatbot behavior. By fully understanding chatbot behavior and decision-making processes, we can help mitigate inaccurate and biased responses.
All content served by our chatbots is accurate, as it is reviewed by subject matter experts or undergoes detailed policy review. Additionally, protecting people’s data is our main priority. We work closely with the government on developing data-safeguarding techniques: any personal data inadvertently collected by the chatbot is removed from the database.
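NeuroSoph’s actual removal process is not public, but a minimal illustration of scrubbing inadvertently collected personal data might look like the following sketch. The regex patterns here are illustrative assumptions; a production pipeline would rely on a vetted PII-detection service rather than hand-written expressions.

```python
import re

# Hypothetical patterns for a few common PII types (illustrative only;
# real systems use dedicated PII-detection/NER services).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace any matched PII with a typed placeholder before storage."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

print(redact_pii("My SSN is 123-45-6789, email me at jane@example.com"))
# prints: My SSN is [SSN REMOVED], email me at [EMAIL REMOVED]
```

A step like this would typically run before conversation logs are written to the database, so that personal data never persists in the first place.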
AI chatbots offer a promising avenue to make technology more accessible to everyone, regardless of disabilities or technological expertise.
Our AI chatbots include a range of accessibility features.
By incorporating accessibility features and adhering to universal design principles and WCAG guidelines, AI chatbots have the potential to bridge the digital divide. This helps make digital services and information accessible to a broader audience – ultimately, enhancing inclusivity and ensuring equitable access for all.
The responsible and beneficial deployment of generative AI and LLM-powered chatbots requires robust collaboration between the tech industry and governments. NeuroSoph and the Executive Office of Technology Services and Security (EOTSS) have collaborated closely for over three years to responsibly develop and deploy AI chatbots in Massachusetts.
Transparency and open dialogue are essential, and this collaboration is critical for all stakeholders, including the public. Collaborative efforts have already produced regulatory frameworks that encourage responsible AI development, such as President Biden’s Executive Order on Safe, Secure, and Trustworthy AI.
Policies and frameworks will make the industry better for everyone by providing sensible guardrails for data privacy, bias mitigation, transparency, and fairness, giving the entire tech industry a strong foundation to build upon. This will ultimately lead to responsible AI chatbot deployments while nurturing innovation and economic growth.
As generative AI technology matures, there will likely be wider adoption of generative AI chatbots in the public sector.
We anticipate AI chatbots evolving into personalized public agents. These agents will integrate with internal systems and automatically perform tasks for the public and government staff, for example, applying for government benefits, renewing licenses, filing taxes, or drafting documents such as RFQs and RFPs.
Additionally, we expect enhanced multilingual capabilities, more personalized experiences informed by data-driven decision-making, improved policy development that better meets people’s needs, and AI chatbots playing a more prominent role in disseminating information and instructions during emergencies.
Overall, generative AI and LLM chatbots have the potential to revolutionize government digital service delivery by making public services more responsive, efficient, and streamlined for constituents, while providing significant productivity improvements for government workers.
The AI landscape is rapidly evolving, and it is an exciting time in this space!
NeuroSoph works closely with our government partners to ensure the responsible development and deployment of chatbots. We use supervised techniques to control chatbot behavior and outcomes. Our chatbots are trustworthy because all chatbot content is vetted and approved by subject matter experts before being released to the public. Additionally, our chatbots are trained only on user conversation data that does not include any personal information.
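To illustrate how a controlled chatbot can avoid free-form generation, here is a minimal sketch in which responses are selected only from pre-approved, expert-vetted content. The answer bank, matching method, and threshold are illustrative assumptions, not a description of the Specto AI platform.

```python
from difflib import SequenceMatcher

# Hypothetical bank of answers vetted by subject matter experts.
APPROVED_ANSWERS = {
    "how do i renew my driver's license": "Visit the RMV portal to renew online.",
    "where do i apply for snap benefits": "Apply through the DTA Connect website.",
}

FALLBACK = "I'm not sure. Please contact the agency directly."

def respond(query: str, threshold: float = 0.6) -> str:
    """Return the best-matching vetted answer, or a safe fallback.

    Because every response comes from the approved bank, the bot can
    never produce novel (and potentially inaccurate) content.
    """
    best_score, best_answer = 0.0, FALLBACK
    for question, answer in APPROVED_ANSWERS.items():
        score = SequenceMatcher(None, query.lower(), question).ratio()
        if score > best_score and score >= threshold:
            best_score, best_answer = score, answer
    return best_answer
```

In practice, an LLM might handle intent understanding and paraphrase matching, but the same design principle applies: the content delivered to the public is drawn only from reviewed material.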
We offer multiple approaches to implementing generative AI and LLM-enhanced chatbots that do not generate content on their own. Using NeuroSoph’s proprietary, secure, and cutting-edge Specto AI platform, we help streamline government communication, delivering information that is trusted and reliable. We build safe and controlled AI chatbots for the public sector. For more information about our products and services, please contact us today, and let’s extend intelligence in your organization.