Hybrid
Location: Ho Chi Minh City
Negotiable
This role is for a hands-on Conversational AI Engineer focused on building and operating agentic,
intentless, LLM-based conversational systems in production. You will design, implement, and run
LLM-powered conversational agents that reason, call tools/APIs, orchestrate workflows, and
collaborate with human agents across voice and chat channels. The role includes maintaining static
intent taxonomies to a limited extent, but most of the time you will work with prompting,
tooling, policies, and memory to steer behaviour and ensure reliability, safety, and business value.
You will typically work with agent frameworks and orchestration layers (e.g. agentic runtimes, workflow
engines, tools/functions), enterprise systems (ITSM, CRM, HR, identity, knowledge), and channels
such as MS Teams, web chat, email, and AI Voice in CCaaS platforms.
Main Responsibilities
Design and build agentic, intentless conversational experiences
• Design end-to-end user journeys where conversational agents act as first-line workers:
diagnosing problems, asking clarifying questions, calling tools, and completing tasks.
• Build agentic conversational flows using LLMs with tools/functions, policies, and orchestration
logic instead of rigid intent trees.
• Craft and maintain system prompts, role prompts, and conversational policies that define how
agents act, escalate, and interact with users and human agents.
• Configure tooling / function calling for the agents (e.g. “reset password”, “create incident in
ITSM”, “check device health”, “update ticket with summary”) and define how the agent
sequences multiple tools in a workflow.
• Design conversational behaviours for clarification, uncertainty, and safety: how the agent asks
follow-up questions, when it refuses, when it hedges, and when it escalates.
• Implement RAG-style retrieval workflows (using vector search, enterprise search, or knowledge
APIs) so the agent can ground responses in up-to-date, governed knowledge rather than
hallucinating.
• For voice use cases, work with TTS/STT and telephony/CCaaS platforms so that agentic
behaviour works equally well over phone calls (including barge-in, call control, and handoffs to
agents).
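To make the tooling/function-calling responsibility above concrete, here is a minimal sketch of what a tool declaration and dispatcher can look like. The schema shape follows the common JSON-schema style used by major LLM platforms, but all names here (`reset_password`, `TOOL_REGISTRY`, the field layout) are illustrative assumptions, not a specific product's API:

```python
# Minimal sketch: declare a tool the LLM can call, and route the model's
# tool-call back to a backend function. Names are illustrative only.

RESET_PASSWORD_TOOL = {
    "type": "function",
    "function": {
        "name": "reset_password",
        "description": "Reset a user's password and send a temporary one.",
        "parameters": {
            "type": "object",
            "properties": {
                "user_id": {"type": "string", "description": "Directory ID"},
                "notify_channel": {
                    "type": "string",
                    "enum": ["email", "sms"],
                    "description": "Where to send the temporary password",
                },
            },
            "required": ["user_id"],
        },
    },
}

def reset_password(user_id: str, notify_channel: str = "email") -> dict:
    """Hypothetical backend; a real one would call an identity API."""
    return {"status": "ok", "user_id": user_id, "notified_via": notify_channel}

TOOL_REGISTRY = {"reset_password": reset_password}

def dispatch_tool_call(name: str, arguments: dict) -> dict:
    """Route a model-emitted tool call to its registered handler."""
    handler = TOOL_REGISTRY.get(name)
    if handler is None:
        return {"status": "error", "reason": f"unknown tool: {name}"}
    return handler(**arguments)
```

In a real deployment the dispatcher would also validate arguments against the schema and enforce authorisation scopes before executing the call.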
Integrate AI agents with enterprise systems, tools, and data
• Define and implement tool schemas and APIs for the agents: ITSM actions, CRM updates, HR
workflows, identity operations, self-service actions, diagnostics, and automation triggers.
• Integrate with systems such as ServiceNow (system of record), Salesforce / other CRMs, HR
platforms, identity providers, endpoint management systems, and other enterprise systems
and platforms involved in the automations.
• Implement secure authentication and authorisation patterns (SSO, OAuth, delegated access)
so agents can act on behalf of users within clearly defined scopes.
• Work with integration/platform teams to create robust, reusable tool backends with proper error
handling, timeouts, retries, and rate limiting.
• Ensure non-functional requirements for tools are respected: latency budgets for conversational
interactions, resilience patterns (fallbacks if a tool fails), and observability of tool calls.
• In environments with multiple agents, help define which agent owns which tools and how they
collaborate or delegate to each other.
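As an illustration of the resilience patterns mentioned above (fallbacks if a tool fails, retries), a minimal retry wrapper for a tool backend might look like the following. The helper name and parameters are hypothetical; a production version would add per-call timeouts, rate limiting, and structured logging of each attempt:

```python
import time

def call_with_retries(fn, *, attempts=3, base_delay=0.1, fallback=None):
    """Call a tool backend, retrying transient failures with exponential
    backoff; return a degraded fallback result if all attempts fail, so
    the conversation can continue instead of crashing."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                break
            time.sleep(base_delay * (2 ** attempt))
    return fallback
```

The fallback value is what the agent would surface to the user ("I couldn't reach the ticketing system, shall I escalate?") rather than a raw stack trace.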
Own day-to-day operations and continuous improvement
• Monitor live conversations, tool usage, and outcomes via logs, dashboards, and transcripts:
containment/deflection, task completion, escalation patterns, CSAT, impact on handling time
and backlog.
• Review conversations regularly to identify failure modes: hallucinations, wrong tool selection,
unclear or overconfident answers, excessive back-and-forth, or unnecessary escalations.
• Adjust prompts, tools, policies, and orchestration logic to improve reliability and performance
rather than endlessly tweaking intents.
• Maintain evaluation sets and scenarios (e.g. “golden conversations” and test cases) and use
them to regression-test changes to prompts, tools, or model versions.
• Run experiments and A/B tests: different prompts, different models or model configurations,
different tool strategies, and different handoff rules – always tied to business KPIs.
• Manage environments and releases: dev/test/prod setups, controlled rollouts of agent changes,
model updates, and prompt changes, with clear roll-back plans.
• Respond to operational incidents (e.g. a model change causing unexpected behaviour, a tool
outage, a sudden spike in errors) and coordinate with infra, platform, and vendor teams.
• Keep runbooks and documentation up to date so the wider support organisation knows how to
monitor, troubleshoot, and escalate agentic AI issues.
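A "golden conversation" evaluation set of the kind described above can be sketched as a list of pinned cases replayed against the agent whenever prompts, tools, or model versions change. Here `run_agent` is a hypothetical stand-in for the agent under test, and the check is simplified to tool selection only:

```python
# Sketch: each golden case pins an input utterance to the tool the agent
# is expected to select. Real evals would also score answer content,
# escalation behaviour, and multi-turn flows.

GOLDEN_CASES = [
    {"utterance": "I forgot my password", "expected_tool": "reset_password"},
    {"utterance": "My laptop fan is loud", "expected_tool": "create_incident"},
]

def evaluate(run_agent, cases=GOLDEN_CASES):
    """Return the pass rate and the list of failing cases."""
    failures = []
    for case in cases:
        chosen = run_agent(case["utterance"])
        if chosen != case["expected_tool"]:
            failures.append({**case, "actual_tool": chosen})
    pass_rate = 1 - len(failures) / len(cases)
    return pass_rate, failures
```

Wiring this into CI means a prompt or model change that silently breaks tool selection fails the build instead of reaching production.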
Ensure safety, governance, and compliance for LLM-based conversations
• Implement and maintain safety layers, policies, and filters: content filtering, PII redaction where
required, access control, and constraints on tools (what an agent can or cannot do).
• Work with security, risk, and compliance teams to ensure data handling, logging, and retention
comply with policies and regulations (e.g. GDPR, internal data governance).
• Contribute to or help define governance processes for conversational and agentic AI: approval
workflows for new tools, review processes for new high-impact use cases, criteria for model
upgrades.
• Ensure that prompt content, knowledge sources, and tools are curated and governed, so the
system remains maintainable and auditable as it grows.
• Promote and enforce accessibility and inclusivity in conversational experiences (e.g. clear
language, multilingual support where appropriate, accessible web widgets, voice options, etc.).
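A pre-logging PII redaction filter of the kind described above can be sketched as follows. The regex patterns are deliberately simple illustrations; production systems typically rely on dedicated PII-detection services rather than regexes alone:

```python
import re

# Sketch: mask email addresses and long digit runs (phone/account
# numbers) before transcripts are stored or shown on dashboards.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
DIGITS_RE = re.compile(r"\b\d{7,}\b")

def redact(text: str) -> str:
    """Replace detected PII spans with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    return DIGITS_RE.sub("[NUMBER]", text)
```

Applying redaction at the logging boundary keeps raw PII out of transcripts, dashboards, and evaluation sets by default.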
Skills, Knowledge & Experience
• Experience building and operating LLM-driven conversational assistants or agents in a
production environment (internal or external users).
• Practical, hands-on work with major LLM platforms (e.g. OpenAI / Azure OpenAI, Anthropic, or
similar) and conversational orchestration frameworks (agent runtimes, tools/functions, or multi-agent frameworks).
• Strong understanding of prompt engineering, tools/function calling, and agent design: how to
structure system prompts, design tool schemas, handle ambiguity, and control behaviour via
policies.
• Solid software engineering skills in at least one language commonly used with LLMs and API
integrations (typically Python or JavaScript/TypeScript; other languages welcome).
• Experience designing and consuming REST APIs and event-driven integrations, working with
JSON, webhooks, queues, and messaging systems.
• Familiarity with enterprise platforms that conversational agents commonly interact with: ITSM
(ServiceNow), CRMs (Salesforce, HubSpot), HR, identity platforms, endpoint management, and others.
• Familiarity with contact centre / CCaaS platforms and strong understanding of telephony
concepts (SIP, call routing, queues, IVR flows).
Nice to have
• Experience with multi-agent systems, workflow engines, or agent orchestrators (e.g.
frameworks that coordinate multiple agents, tools, and policies for complex tasks).
• Familiarity with LLM evaluation and testing methodologies: scenario-based evals, automated
scoring, human-in-the-loop review processes.
• Experience with data and analytics tools (e.g. building dashboards that show impact and
performance of AI agents).
• Background in service management or customer support (particularly IT support), giving you
intuition about real-world workflows and constraints.
• Exposure to MLOps / LLMOps practices, especially around managing model versions,
monitoring behaviour, and enforcing policies at scale.