Senior Full-Stack Engineer (Backend, DevOps Heavy)

7-9 Years
SGD 5,000 - 10,000 per month
Posted 20 hours ago

Job Description

Mission

HydraX is a MAS-licensed digital capital-markets infrastructure platform - tokenisation, institutional custody, bilateral dealing, and the AGX digital exchange. We are in the middle of a multi-phase transformation to rebuild the way we design, build, and operate software - with AI agents as the default execution layer across the SDLC, and guardrails suitable for a regulated environment.

We are hiring a Senior Full-Stack Engineer who is backend- and DevOps-heavy and works AI-native by default. You will own the build of production systems that HydraX runs on - services that integrate with third-party APIs, enforce strict data and audit boundaries, and ship with the kind of test coverage you would want to show a regulator. Some of these systems will be used daily by Ops, Compliance, and Risk; some will sit closer to our clients and our regulated offerings. You will work on both.

You will typically:

  • Own and maintain Go-based HTTP services backed by PostgreSQL.

  • Build the thin but polished Next.js dashboards and web surfaces that users work in day to day.

  • Wire up conversation-based workflows where that is the best interface for the job.

  • Take a system from spec to staging UAT to production, including CI/CD and deployment onto our shared AWS platform.

You are expected to do most of this by orchestrating AI coding agents (local / background) across the SDLC, rather than typing production code.

Why this role exists

Our team has pushed our engineering specs to a point where a capable engineer, using Cursor (or equivalent) well, can ship a full system (backend, frontend, tests, deployment) in weeks rather than quarters. We need another engineer in the seat to keep the throughput up without dropping our compliance and security bar.

The constraint is no longer typing speed. The constraint is judgement - which part of a build goes to a background agent, which part needs a tight local loop, which failure modes must have named tests before an agent writes a line, and which architectural decisions must stay with a human because the model will otherwise pick the locally-elegant-but-globally-wrong option.

We want someone who can hold that line while still shipping fast.

Working style: AI-Native

This is not a "bring Copilot as an autocomplete" role. It is also not a "you must already run six parallel background agents" role. We are looking for someone in the middle - clearly moving in the AI-native direction, comfortable being coached on our specific patterns, and allergic to going back to the old way.

Concretely, by the end of your first 60 days, we expect you to be able to:

  1. Use Cursor (or equivalent) as your primary IDE, with local agents handling a meaningful share of day-to-day work - not just autocomplete.

  2. Hand off self-contained components to background agents from a spec we provide, converge them via a test-driven loop, and return a mergeable PR.

  3. Drive QA through agents - generate test matrices from specs, close coverage gaps, generate realistic fixtures, identify weak tests, and run integration suites against real dependencies (containerised PostgreSQL, mocked external services). You treat tests as the holdout set that defines done, not as a post-hoc checkbox.

  4. Recognise the common failure modes of AI code generation - hallucinated APIs, plausible-but-wrong library versions, tautological tests, silent mocking of the thing under test, spec/implementation drift, secrets in diffs - and build small habits around catching each before merge.

  5. Write specs that agents can execute without you hand-holding every step. You will co-author specs with the team lead and, over time, start authoring your own.

If you already work this way, great - you will fit quickly and probably push the team further. If you have strong fundamentals and are moving in this direction but not fully there yet, that is also fine - we will pair you with our most AI-fluent engineers during onboarding and expect you to be fully productive by month two.

What we are not hiring for:

  • Someone who resists agent-driven development on principle.

  • Someone who treats AI tooling as an optional add-on.

  • Someone who wants to hand-write everything because it feels more real.

That working style is incompatible with how the team ships today.

What you will build

The systems we build tend to share a shape, regardless of whether they face Ops, Compliance, Risk, engineering, or our clients:

  • A Go HTTP service (Gin or similar) that integrates with one or more third-party APIs, enforces business rules, writes to a PostgreSQL database designed for auditability, and exposes a versioned REST API documented in OpenAPI.

  • A Next.js web surface - dashboard, portal, or purpose-built UI - that gives its users a clean view of the system's state and the forms and flows they need to get work done.

  • The tests that back all of it - unit tests, integration tests against a real containerised database, end-to-end tests verifying data boundaries and audit properties, and a CI pipeline that runs them on every push.

  • The deployment onto our shared AWS platform - containerised service, registered against the shared platform's Terraform modules (you will read and extend these, not own them single-handedly), wired into secrets, logs, and alarms.
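As a hedged sketch of that service shape - using the standard library's net/http rather than Gin, with a hypothetical route, error code, and helper names invented here for illustration - a versioned endpoint with request-ID propagation and structured errors might look like:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// apiError is a structured error body; the field names are illustrative.
type apiError struct {
	Code      string `json:"code"`
	Message   string `json:"message"`
	RequestID string `json:"request_id"`
}

// ensureRequestID returns the inbound X-Request-ID if present,
// otherwise mints a fresh one so every log line can be correlated.
func ensureRequestID(inbound string) string {
	if inbound != "" {
		return inbound
	}
	b := make([]byte, 8)
	rand.Read(b)
	return hex.EncodeToString(b)
}

// withRequestID propagates the request ID onto the response headers
// before the handler runs.
func withRequestID(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("X-Request-ID", ensureRequestID(r.Header.Get("X-Request-ID")))
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	// Versioned REST route; the path and payload are placeholders.
	mux.HandleFunc("/v1/accounts/", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusNotFound)
		json.NewEncoder(w).Encode(apiError{
			Code:      "account_not_found",
			Message:   "no such account",
			RequestID: w.Header().Get("X-Request-ID"),
		})
	})

	srv := httptest.NewServer(withRequestID(mux))
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/v1/accounts/unknown")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Println(resp.StatusCode, resp.Header.Get("X-Request-ID") != "", string(body))
}
```

In a real service the middleware would also inject the ID into the logger and onto outbound third-party calls; Gin's equivalent is an ordinary gin.HandlerFunc middleware.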

Expect to ship one complete system roughly every 4-6 weeks once ramped, sustainably. That is our current pace, not an aspirational one.

Core responsibilities

  1. Own end-to-end delivery of production systems. Spec → schema → backend → API → optional conversational surface → web UI → tests → CI/CD → staging UAT → production. You coordinate with specialists (infra, compliance, product) where depth matters, but you drive the loop.

  2. Write clear specs before you write code. Prerequisites, acceptance criteria, a named test list, and the compliance or security controls that need to be enforced. This is the single highest-leverage habit we expect you to develop and keep.

  3. Run red/green/refactor TDD strictly. Every new behaviour starts with a failing test that describes it. You orchestrate an agent to write the tests, confirm they fail for the right reason, then make them pass.

  4. Choose local vs background agents deliberately. Exploratory, architecturally-uncertain work → local agent with you in the loop. Self-contained, spec-complete components with a clear test list → background agent on a feature branch, human PR review on the way out.

  5. Drive QA through agents. Generate exhaustive test matrices from specs, close coverage gaps, write property-based and table-driven tests, generate realistic fixtures, and maintain integration suites that run against real dependencies rather than over-mocked ones.

  6. Hold the quality and security line on PRs. Linters and PR review bots catch the mechanical issues - you catch the layered ones. Logic in the wrong layer, concrete dependencies where interfaces belong, over-mocked critical paths, tests that assert on the wrong thing, fixtures that do not match the real upstream contract.

  7. Keep our engineering platform getting cheaper to build on. Promote reusable patterns - web UI scaffolding, audit store patterns, conversational surface helpers, evidence generation - into shared libraries so the third system you build takes less effort than the first.

  8. Document for the next engineer and the next agent. Repo-level AGENTS.md, WORKLOG.md, per-component build prompts, fixtures with provenance. If another engineer or background agent cannot self-onboard from your docs, the docs are incomplete.
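The red/green, table-driven loop in points 3 and 5 is standard Go style. A minimal sketch, using a hypothetical business rule (rounding a fee to whole cents, half away from zero) - in the real codebase this would live in a _test.go file driven by testing.T:

```go
package main

import (
	"fmt"
	"math"
)

// roundFeeCents rounds a raw fee to whole cents, half away from zero.
// The rule itself is hypothetical; the test structure is the point.
func roundFeeCents(raw float64) int64 {
	return int64(math.Round(raw))
}

func main() {
	// Each case would start life as a RED entry: name it, watch it
	// fail for the right reason, then make it pass.
	cases := []struct {
		name string
		in   float64
		want int64
	}{
		{"exact amount", 100.0, 100},
		{"half rounds up", 99.5, 100},
		{"below half rounds down", 99.4, 99},
		{"negative half rounds away from zero", -99.5, -100},
	}
	for _, tc := range cases {
		if got := roundFeeCents(tc.in); got != tc.want {
			panic(fmt.Sprintf("%s: got %d, want %d", tc.name, got, tc.want))
		}
	}
	fmt.Println("all cases pass")
}
```

Named cases are what makes the "named test list in the spec" habit cheap to enforce: the spec's list and the table's name fields map one-to-one.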

Must-have experience

  • 7+ years in software engineering with real production ownership of backend systems, relational databases, HTTP APIs, and the web front-ends on top of them.

  • Strong Go - standard library fluency, concurrency patterns, interface-driven design, table-driven testing. We write a lot of Go and we need you to be strong in it on day one.

  • Strong TypeScript and React/Next.js - enough that you can own the front-end portions of a system without needing a dedicated front-end engineer. We do not need specialist-level React; we need competent full-stack.

  • PostgreSQL schema design and migration discipline - indexes, constraints, forward-only migrations, role-based access, reasoning about JSONB vs relational trade-offs.

  • TDD experience that is actually TDD - RED phase is real, mocks are explicit, integration tests use real dependencies in containers.

  • HTTP API design - versioned REST, OpenAPI as a source of truth, RBAC middleware, input validation including basic injection defences, rate limiting, structured errors, request-ID propagation.

  • AWS cloud experience in production - you have deployed and operated services on AWS. You are comfortable in the console and via IaC. You know your way around at least: VPC basics, IAM, a container runtime (ECS Fargate, EKS, or equivalent), a load balancer, RDS, Secrets Manager, CloudWatch, and GitHub Actions-based deploys (preferably via OIDC). You can read and extend Terraform modules that someone else wrote.

  • CI/CD ownership - GitHub Actions or equivalent, Docker multi-stage builds, lint test scan gates on every PR.

  • AI-native-leaning working style, as described above - evidenced by recent work, not just claimed.

Good to have (any of these is a plus; none are required)

  • Terraform - deeper than "I've edited someone else's module". Module design, workspaces, state management, drift handling. This is very helpful because our shared platform is Terraform-based.

  • Compliance and controls experience - MAS / SOC 2 / ISO 27001 / similar. Exposure to audit evidence, access controls, data retention, incident response. You understand that in a regulated environment, evidence is part of the product, not paperwork.

  • Regulated / security-sensitive systems - fintech, payments, healthcare, or equivalent where a security lapse has regulatory consequences.

  • Multi-agent system design - supervisor/worker patterns, composable prompts, RAG knowledge bases. Useful for a subset of our systems but not something you need on day one.

  • Conversational / chat-platform experience - slash commands, modals, message buttons, signed webhooks. Useful but learnable in a week if you are strong elsewhere.

  • Capital markets / tokenisation / digital assets exposure - we will teach you the domain if you are strong on the engineering and AI-native working style.
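The supervisor/worker pattern mentioned above maps directly onto Go's concurrency primitives. A minimal, domain-free sketch (the squaring "work" and int task type are placeholders):

```go
package main

import (
	"fmt"
	"sync"
)

// runWorkers fans tasks out to n workers and collects their results.
func runWorkers(tasks []int, n int) []int {
	in := make(chan int)
	out := make(chan int)
	var wg sync.WaitGroup

	// Workers: drain the task channel until it closes.
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for t := range in {
				out <- t * t
			}
		}()
	}

	// Supervisor: feed tasks, then signal no more work.
	go func() {
		for _, t := range tasks {
			in <- t
		}
		close(in)
	}()

	// Close the results channel once every worker has exited.
	go func() {
		wg.Wait()
		close(out)
	}()

	var results []int
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	fmt.Println(len(runWorkers([]int{1, 2, 3, 4}, 2))) // prints 4
}
```

Results arrive unordered; real supervisors usually carry task IDs and an error channel alongside the result payload.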

More Info

Job ID: 146461319
