DataArt’s AI Ready Program Sets as a Target a State of Affairs where AI is the New Norm

Yuri Gubin, Chief Innovation Officer at DataArt

Intro remark. Founded in 1997, DataArt is a global software engineering firm that has continually evolved to become the trusted technology partner of market leaders. Through our 20+ domain-specific Labs dedicated to R&D and strategic innovation, we work together with our clients as partners for progress to ensure they stay on the leading edge. The AI Lab is one of the most prominent and most active of these groups, and AI is a strategic direction for the company.

Approaching the challenge. The release of ChatGPT by OpenAI last year changed the perception of how complex AI is. It became something that is two clicks away, demonstrating outstanding capabilities and promising benefits to virtually every industry. Although we had been following generative AI and its underlying architectures for quite some time, this breakthrough posed a challenge for the company: ‘How does it affect our work with clients? And how does it affect our work as a company?’ DataArt found itself in the position of being its own client: a large organization with virtually unlimited opportunities to transform operations, marketing, and technology.

The response. The AI Ready program.

Years ago, when DevOps was an emerging discipline, we anticipated that every project, regardless of stack, state, and specifics, would have a DevOps pillar. Now every Solution Architect, Developer, and QA engineer takes CI/CD for granted. Platform engineering, various release strategies, reference architectures, and accelerators were developed to support the adoption of DevOps, and all of this is now the norm and the mainstream.

With this breakthrough in the generative AI field, we developed the DataArt AI Ready program, which sets as its target a state of affairs where AI is the new norm. Every solution, every architecture, every product, and every project at DataArt is designed and delivered with AI as a first-class citizen. You may not have an EV yet, but your house is built with a place for an EV charger. Even if a project is not leveraging AI right now, the interface and runway are there.

The program includes multiple short-, mid-, and long-term (crawl-walk-run) objectives. It is based on the following pillars:

  • Outreach. This pillar primarily focuses on how we work with our existing and new clients. The company has made significant progress in developing joint offers and programs with our partners, as well as solutions and technology accelerators.
  • Broad technical skills. Not only should the engineers directly involved in AI have sufficient skills; the company as a whole goes through systematic upskilling so that everyone has vetted expertise appropriate to their role and function. We talk about DevOps, SRE, Data, and Analytics Labs, but we also equip the BA, QA, and PM communities with AI capabilities. The program doesn’t stop at engineering; it also impacts HR, Recruitment, Compliance, IT, and the other departments that run the company. After three weeks of ideation, we discovered more than 50 use cases coming from almost every department in the company.
  • AI domain expertise. The AI Lab now sets a much higher bar for itself. How deep can we go into prompt engineering, foundation models, and reinforcement learning? Are we ready when clients ask us to train and deploy a new LLM? The company has invested in R&D to address these questions and to ensure that, at any given time, work is underway that delivers bleeding-edge technology results and creates value for our clients and for the company itself.
  • Subject matter expertise. We deliver real value to clients because of our deep knowledge of the industries in which they operate. It is crucial to be a good engineer and, at the same time, to understand domain-specific use cases and details.

Details of the solution. Private AI Platform and scalable AI.

To support the aspirational vision of the AI Ready program, we needed to implement a platform that would allow us to scale the implementation of AI PoCs and use cases across the company. The platform is cloud-native and addresses several crucial considerations (a minimal sketch of how they might fit together follows the list):

  • Access control and authorization
  • Logging and compliance
  • Rate limiting
  • Model parametrization and fine-tuning
  • Interfaces
  • API integrations
  • Data integrations
  • Shared core with use case extensions and plugin models
  • Risk mitigation
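
As a rough illustration of how several of these considerations might combine in a shared core with per-use-case plugins, here is a minimal sketch. The names (AIGateway, UseCasePlugin, the rate-limit numbers, and the example use case) are hypothetical stand-ins, not the actual platform's components.

```python
import time
import logging
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Set

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

# A use-case plugin is just a callable that turns a user request into a prompt.
UseCasePlugin = Callable[[str], str]


@dataclass
class AIGateway:
    """Hypothetical shared core: access control, rate limiting, logging, plugin dispatch."""
    allowed_roles: Dict[str, Set[str]] = field(default_factory=dict)  # use case -> roles
    plugins: Dict[str, UseCasePlugin] = field(default_factory=dict)
    rate_limit_per_minute: int = 30
    _calls: Dict[str, List[float]] = field(default_factory=dict)      # user -> call timestamps

    def register(self, use_case: str, plugin: UseCasePlugin, roles: Set[str]) -> None:
        self.plugins[use_case] = plugin
        self.allowed_roles[use_case] = roles

    def handle(self, user: str, role: str, use_case: str, request: str) -> str:
        # Access control and authorization
        if role not in self.allowed_roles.get(use_case, set()):
            raise PermissionError(f"{role} is not allowed to use {use_case}")
        # Rate limiting: a simple one-minute sliding window per user
        now = time.time()
        recent = [t for t in self._calls.get(user, []) if now - t < 60]
        if len(recent) >= self.rate_limit_per_minute:
            raise RuntimeError("rate limit exceeded")
        self._calls[user] = recent + [now]
        # Logging and compliance: record who used which use case, not the raw content
        log.info("user=%s use_case=%s request_chars=%d", user, use_case, len(request))
        # The shared core delegates prompt construction to the use-case plugin
        prompt = self.plugins[use_case](request)
        return prompt  # a real platform would send this prompt to a configured model


# Registering a hypothetical helpdesk use case
gateway = AIGateway()
gateway.register("helpdesk", lambda q: f"Answer this IT helpdesk question: {q}", {"employee"})
print(gateway.handle("alice", "employee", "helpdesk", "How do I reset my VPN password?"))
```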

Risk mitigation was crucial because the platform needs tools to ensure data privacy, along with prompt engineering techniques to address ethical concerns and maintain the desired quality of generated content.
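
A minimal sketch of what such risk-mitigation tooling could look like, assuming a simple regex-based redaction step and a guardrail system prompt; the patterns and wording here are illustrative only, not the platform's actual rules.

```python
import re

# Illustrative patterns only; a real platform would use far more robust PII detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

GUARDRAIL = (
    "You are an internal assistant. Do not reveal personal data, "
    "do not speculate, and decline requests outside company policy."
)


def redact_pii(text: str) -> str:
    """Replace obvious PII with placeholders before the text reaches a model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text


def build_safe_prompt(user_input: str) -> str:
    """Wrap the redacted input with a guardrail instruction."""
    return f"{GUARDRAIL}\n\nUser request: {redact_pii(user_input)}"


print(build_safe_prompt("Email john.doe@example.com about ticket 42, phone +1 202 555 0143"))
```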

Conclusion

By embracing the challenge and developing a program and a platform to support it, we found ourselves in a position where we can point to our own experience as an example of the transformations that are possible with AI. One tangible example is a Helpdesk Support Engine that now leverages a GPT model to answer questions. A chatbot handles 60-70% of L1 requests. Confluence-hosted corporate information is easily discoverable via Teams integrations, and the Helpdesk workload is handled by a substantially smaller team, freeing people to work on more complex projects.
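
As a rough sketch of how such an engine can work, the snippet below pairs a naive keyword retriever over exported Confluence content with prompt assembly for a GPT-style model. The page texts, the retrieve helper, and answer_l1_question are hypothetical stand-ins, and the actual model call is left as a stub rather than a specific API.

```python
# Hypothetical stand-in for Confluence content exported into plain text.
CONFLUENCE_PAGES = {
    "VPN access": "To reset your VPN password, open the self-service portal and choose 'Reset VPN'.",
    "Expense policy": "Expenses under $50 do not require manager approval.",
}


def retrieve(question: str, pages: dict, top_k: int = 1) -> list:
    """Naive keyword-overlap retrieval; real systems typically use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(
        pages.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]


def answer_l1_question(question: str) -> str:
    """Assemble a grounded prompt for an L1 helpdesk question."""
    context = "\n".join(retrieve(question, CONFLUENCE_PAGES))
    prompt = (
        "Answer the employee's question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    # A real implementation would send `prompt` to a GPT model here
    # and return the generated answer to the Teams chatbot.
    return prompt


print(answer_l1_question("How do I reset my VPN password?"))
```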

Since this is just the beginning, we expect more tangible outcomes from implementing department-specific use cases.
