InterVision Boosts AI Development with AWS LLM League and Amazon SageMaker

Enhancing Citizen Engagement: InterVision’s Journey with Generative AI in Municipal Services

Transforming Citizen Engagement: The Role of Generative AI in Local Government Services

In an era where citizen expectations are at an all-time high, cities and local governments are on a quest to enhance their non-emergency services. Recognizing that intelligent, scalable contact center solutions are pivotal in improving citizen experiences, organizations are turning to innovative technologies. One such leader in this transformation is InterVision Systems, LLC, an AWS Premier Tier Services Partner and Amazon Connect Service Delivery Partner. Their groundbreaking contact center solution, ConnectIV CX for Community Engagement, is designed specifically for city and county services, streamlining municipal service delivery through AI-powered automation and omnichannel engagement.

The Next Frontier: Generative AI

While InterVision’s existing solution already makes significant strides in improving service delivery, they identified an opportunity to further enhance their offerings with advanced generative AI capabilities. By leveraging the AWS LLM League program, InterVision accelerated their generative AI development for non-emergency (311) contact centers. This initiative marked a strategic milestone in democratizing machine learning (ML), enabling partners to build practical generative AI solutions tailored to their customers’ needs.

The AWS LLM League: A Game-Changer

The AWS LLM League represents a novel approach to democratizing ML through gamified enablement. This program empowers a diverse range of roles—from solutions architects to sales teams—to successfully fine-tune and deploy generative AI models without requiring deep data science expertise. Initially launched as larger multi-organization events, the program has evolved to offer focused single-partner engagements that align directly with specific business objectives, allowing for customization around real-world use cases.

The program follows a three-stage format designed to build practical generative AI capabilities:

Hands-On Workshop: Participants learn the fundamentals of fine-tuning large language models (LLMs) using Amazon SageMaker JumpStart.

Model Development Phase: Participants iterate through multiple fine-tuning approaches, submitting their models to a dynamic leaderboard evaluated by an AI system against specific benchmarks.

Interactive Finale: The program culminates in a live game show where top-performing participants showcase their models through real-time challenges, evaluated by an expert panel, AI benchmarks, and audience participation.

The Power of Fine-Tuning for Business Solutions

Fine-tuning an LLM is a form of transfer learning that allows organizations to train a pre-trained model on a new dataset without starting from scratch. This process can yield accurate models with smaller datasets and less training time. While larger foundation models (FMs) offer impressive general capabilities, fine-tuning smaller models for specific domains often results in exceptional outcomes at a lower cost.

For instance, a fine-tuned 3B parameter model can outperform larger 70B parameter models in specialized tasks while requiring significantly fewer computational resources. This approach aligns with recent industry trends, such as DeepSeek’s success in creating more efficient models through knowledge distillation techniques.

In InterVision’s case, the AWS LLM League program was tailored around their ConnectIV CX solution for community engagement services. Fine-tuning lets the model handle municipality-specific procedures precisely and generate responses aligned with local government protocols, while the smaller model footprint reduces operational costs and inference latency, improving the customer experience.
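
To make this concrete, the sketch below shows what a small instruction-tuning dataset for a 311-style assistant might look like. The instruction/context/response layout is a common convention for SageMaker JumpStart text-generation fine-tuning, but the exact schema and any accompanying prompt template depend on the specific model; the municipality and policies shown are invented for illustration.

```python
import json

# Illustrative 311-style instruction-tuning records. The instruction/context/response
# layout is a common JumpStart convention; the exact schema depends on the chosen model,
# and the city policies below are fictional.
records = [
    {
        "instruction": "A resident asks how to report a pothole.",
        "context": "Streets division knowledge base for a fictional city.",
        "response": "You can report a pothole through the 311 portal or by calling 311. "
                    "Please include the nearest street address and a short description.",
    },
    {
        "instruction": "A resident asks when bulk item pickup is scheduled.",
        "context": "Solid waste services, bulk collection policy.",
        "response": "Bulk item pickup runs during the first full week of each month. "
                    "Place items at the curb by 7 AM on your regular collection day.",
    },
]

# Write the dataset as JSON Lines, the format typically uploaded to Amazon S3
# as the training channel for a fine-tuning job.
with open("train.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```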

Streamlined Fine-Tuning with SageMaker Studio

The solution centers on SageMaker JumpStart within Amazon SageMaker Studio, a web-based integrated development environment (IDE) for ML. This platform allows practitioners to build, train, debug, deploy, and monitor their ML models in a low-code/no-code environment.
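
For those working from a notebook rather than the Studio UI, the SageMaker Python SDK exposes the same JumpStart model catalog programmatically. The snippet below is a minimal sketch that lists available model IDs and narrows them down by simple string matching; the substring filter is just a convenience for this example.

```python
from sagemaker.jumpstart.notebook_utils import list_jumpstart_models

# Retrieve the catalog of JumpStart model IDs (the same catalog browsed in the Studio UI)
# and narrow it to text-generation models by matching on the model ID string.
all_models = list_jumpstart_models()
text_generation_models = [m for m in all_models if "textgeneration" in m]

print(f"{len(text_generation_models)} text-generation models found, for example:")
for model_id in text_generation_models[:5]:
    print(f"  {model_id}")
```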

The fine-tuning process involves several steps (a code sketch follows the list):

Select a Model: Choose from pre-trained, publicly available FMs for various problem types.

Provide a Training Dataset: Point the fine-tuning job to a dataset stored in Amazon S3, which offers virtually unlimited storage capacity.

Perform Fine-Tuning: Customize hyperparameters and initiate the fine-tuning job.

Deploy the Model: Access the model in SageMaker Studio and deploy it for inference.

Evaluate and Iterate: Use Amazon SageMaker Clarify to assess model accuracy and identify areas for improvement.
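
As a rough sketch of the first four steps using the SageMaker Python SDK (the Studio UI offers the same flow without code): the model ID, S3 path, instance type, hyperparameter names, and request payload below are illustrative and vary by model, so treat this as a starting point rather than a drop-in recipe.

```python
from sagemaker.jumpstart.estimator import JumpStartEstimator

# Illustrative values -- substitute the JumpStart model ID, S3 bucket, and instance type you actually use.
model_id = "meta-textgeneration-llama-3-2-3b"
training_data = "s3://example-bucket/connectiv-cx/train/"

# Steps 1 and 2: point a JumpStart estimator at a pre-trained model and the training data location.
estimator = JumpStartEstimator(
    model_id=model_id,
    environment={"accept_eula": "true"},  # gated models such as Llama require accepting the EULA
    instance_type="ml.g5.2xlarge",        # illustrative; choose an instance type your account supports
)

# Step 3: override a few fine-tuning hyperparameters (names and defaults vary by model) and start the job.
estimator.set_hyperparameters(instruction_tuned="True", epoch="3")
estimator.fit({"training": training_data})

# Step 4: deploy the fine-tuned model to a real-time endpoint and run a quick smoke test.
# The request payload format also depends on the model's serving container.
predictor = estimator.deploy()
print(predictor.predict({"inputs": "How do I report a missed trash pickup?"}))
```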

This streamlined approach significantly reduces the complexity of developing and deploying specialized AI models while maintaining high performance standards and cost-efficiency.

Empowering InterVision’s AI Capabilities

The AWS LLM League engagement provided InterVision with a practical pathway to enhance their AI capabilities while addressing specific customer needs. Participants could immediately apply their learning to solve real business challenges, compressing their AI development cycle significantly.

Brent Lazarenko, Head of Technology and Innovation at InterVision, remarked, “This experience was a true acceleration point for us. We didn’t just experiment with AI—we compressed months of R&D into real-world impact. Now, our customers aren’t asking ‘what if?’ anymore, they’re asking ‘what’s next?’”

Using the knowledge gained through the program, InterVision has enhanced their technical discussions with customers about generative AI implementation. They also developed an internal virtual assistant using Amazon Bedrock, incorporating custom models and multi-agent collaboration, which serves as a proof of concept for similar customer solutions.
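
The post does not describe the internals of InterVision’s assistant, but as a minimal sketch of how an application can call a Bedrock-based agent, the snippet below uses the Bedrock Agents runtime API via boto3 with placeholder agent identifiers; orchestration across collaborating agents and any custom models would sit behind that agent configuration rather than in this client code.

```python
import uuid
import boto3

# Placeholder identifiers -- replace with the agent ID and alias from your own Bedrock configuration.
AGENT_ID = "YOUR_AGENT_ID"
AGENT_ALIAS_ID = "YOUR_AGENT_ALIAS_ID"

client = boto3.client("bedrock-agent-runtime")

# Send a citizen-style question to the agent; the response streams back in chunks.
response = client.invoke_agent(
    agentId=AGENT_ID,
    agentAliasId=AGENT_ALIAS_ID,
    sessionId=str(uuid.uuid4()),  # a new session per conversation
    inputText="When is bulk trash pickup in my neighborhood?",
)

# Assemble the streamed completion into a single answer string.
answer = ""
for event in response["completion"]:
    chunk = event.get("chunk")
    if chunk:
        answer += chunk["bytes"].decode("utf-8")

print(answer)
```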

Conclusion

The AWS LLM League program exemplifies how gamified enablement can accelerate partners’ AI capabilities while driving tangible business outcomes. Through this focused engagement, InterVision not only enhanced their technical capabilities in fine-tuning language models but also accelerated the development of practical AI solutions for their ConnectIV CX environment.

As organizations continue to explore generative AI implementations, the ability to efficiently develop and deploy specialized models becomes increasingly critical. The AWS LLM League provides a structured pathway for partners and customers to build these capabilities, whether enhancing existing solutions or developing new AI-powered services.

For more insights on implementing generative AI solutions, visit the AWS Machine Learning blog for stories about partners and customers across various industries.

By embracing innovative technologies like generative AI, local governments can not only meet but exceed citizen expectations, paving the way for a more engaged and responsive community.
