Plan workspace architectures; implement SSO/SCIM integration, role-based access control (RBAC), and policy guardrails.
Establish usage analytics, define governance for custom GPTs (actions, connectors, approval workflows), and create runbooks and SOPs.
Design scalable patterns for Assistants with multi-tool orchestration, structured JSON outputs, function/tool calling, the Files and Batch APIs for bulk operations, the Embeddings API for retrieval workflows, and the Moderations API for content safety.
Implement OpenAI-centric RAG pipelines (chunking, embeddings, indexing) with groundedness checks, prompt test suites, red-teaming, and defined cost/performance SLOs.
Own the end-to-end fine-tuning process: dataset curation, training and evaluation, bias and quality checks, rollback and versioning, and telemetry for tuned models.
Implement observability for OpenAI workloads with OpenTelemetry for performance monitoring; add token/cost telemetry, retries with backoff, idempotency, and feature flags/canary deployments.
Document runbooks, troubleshooting guides, and SOPs for production support.
Design across Azure, AWS, or GCP, applying Managed Identities, Secret Management, and Private Link/private endpoints to meet enterprise controls.
Use the OpenAI Agents SDK, LangChain/LangGraph, or Semantic Kernel pragmatically; integrate Azure Cognitive Search, Redis/pgvector, or managed vector services.
Integrate with Copilot Studio and Microsoft Graph; embed assistants into Teams, SharePoint, or line-of-business applications where appropriate.
Enforce modern delivery practices with CI/CD pipelines (GitHub Actions, Azure DevOps) and IaC (Terraform, Bicep).
Support multi-region rollout strategies and environment automation.
Requirements
Diploma or degree in Information Technology, Computer Science, Engineering, Mathematics, or a related field.
Minimum of 7 years' experience in software or solution architecture, with strong programming skills in Python and at least one of Java or TypeScript.