Job Description
LiveRamp is the data collaboration platform of choice for the world’s most innovative companies. A groundbreaking leader in consumer privacy, data ethics, and foundational identity, LiveRamp is setting the new standard for building a connected customer view with unmatched clarity and context while protecting precious brand and consumer trust. LiveRamp offers complete flexibility to collaborate wherever data lives to support the widest range of data collaboration use cases—within organizations, between brands, and across its premier global network of top-quality partners.
Hundreds of global innovators, from iconic consumer brands and tech giants to banks, retailers, and healthcare leaders, turn to LiveRamp to build enduring brand and business value by deepening customer engagement and loyalty, activating new partnerships, and maximizing the value of their first-party data while staying at the forefront of rapidly evolving compliance and privacy requirements.


The Activations Back End team is responsible for the bulk of big data processing that powers LiveRamp’s primary activation product, which delivers hundreds of millions in annual recurring revenue. Our systems process over one hundred thousand batch jobs per day, ranging in size from gigabytes to over 100 terabytes, and power distributions to hundreds of downstream destinations. Cumulatively, our systems process multiple exabytes per year. We provide detailed monitoring, statistics, error recovery, and resiliency to keep this core product reliable for our largest customers.
At LiveRamp, big data processing is not just for back-office analytics. Our product is a big data product: we transform, deduplicate, and transport massive datasets across clouds and regions, while respecting complex rate limits and SLAs, and enabling many of the most successful companies on the planet to activate their data safely and efficiently.
You will:
Lead the design and evolution of a petabyte‑scale activation platform, pushing it toward a delta‑first, cache‑aware, and cost‑efficient architecture.
Shape end‑to‑end technical strategy for major areas of Activations Back End (e.g., matching/delta computation, job orchestration, delivery pipelines), from design through rollout and long‑term maintenance.
Architect and build big data pipelines using Apache Spark/Dataproc, SingleStore, Kubernetes/GKE, and streaming systems (e.g., Pub/Sub, Redpanda/Kafka) where appropriate.
Use workflow engines such as Temporal and Cadence to orchestrate complex, long‑running workflows with robust retry, compensation, and observability, and define patterns other engineers can reuse.
Design for multi‑tenant fairness and scalability, ensuring small, latency‑sensitive jobs stay fast while large backfills and bulk workflows do not starve the system, via job classification, queueing, and rate‑limit–aware scheduling.
Drive performance and cost optimization for petabyte‑scale workloads: reduce duplicate processing, improve cache hit rates, tune cluster sizing and autoscaling policies, and set and track SLOs.
Lead production excellence: own critical services in production, coordinate incident response and postmortems, and drive structural fixes that meaningfully reduce operational load and risk.
Infuse AI into how we build and operate: evaluate and adopt AI‑enhanced tooling (for coding, design exploration, data analysis, and operational debugging) and help define best practices.
Mentor and level up other engineers through design/code reviews, pairing, and technical guidance, and represent Activations Back End in cross‑team architecture forums and external venues.
Your team will:
Re‑architect Activations Back End to be cloud‑forward and multi‑regional, taking advantage of SingleStore and other modern data warehouses to replace legacy Spark‑heavy flows where it makes sense.
Deliver high‑throughput, low‑latency activation by combining:
SingleStore‑backed state and delta computation,
Spark/Dataproc for the heaviest batch workloads,
Streaming infrastructure (e.g., Redpanda/Pub/Sub) for event‑driven and incremental deliveries.
Build smarter orchestration and scheduling using Temporal/Cadence and queueing services to:
Classify jobs (latency‑critical vs. throughput‑heavy vs. background),
Route them through multi‑queue schedulers and capacity pools,
Respect destination‑specific rate limits and SLAs.
Eliminate duplicated work through config canonicalization and caching so identical or similar jobs across customers share heavy computation.
Collaborate closely with partner teams (Identity, Data Foundation, Activations Fullstack, Integrations/OPI) to deliver end‑to‑end improvements in activation reliability, speed, and observability, and continuously raise the bar on operational excellence.
About you:
You are excited to adapt to major industry shifts (including the rapid evolution of AI), and approach your work with curiosity, humility, and a strong desire to keep learning.
AI‑enabled development experience, or strong excitement to learn and grow in using AI‑enhanced development tools (e.g., code assistants, agents for log/metrics analysis, AI‑supported design and review) and help others use them effectively.
5+ years of experience writing and deploying high‑quality production code in a modern language (e.g., Java, Go, Scala, or similar), including owning complex systems in production.
Have led the design and delivery of large‑scale distributed or big data systems with clear business impact (e.g., major latency/cost improvements, substantial reliability gains, or large new capabilities).
Strong data engineering and SQL skills: comfortable modeling data, writing and optimizing complex queries on very large tables, and reasoning about performance, correctness, and cost.
Deep experience owning end‑to‑end data pipelines: ingestion, transformation, orchestration, failure handling, and observability, not just individual jobs or microservices.
Comfortable working in a cloud environment (ideally GCP) and with containerized workloads (Kubernetes/GKE or similar); you understand how infra choices impact performance, cost, and reliability.
Able to define and drive technical strategy: break down multi‑quarter problems, evaluate tradeoffs, align stakeholders, and deliver incremental value along the way.
Excellent communication and collaboration skills; you can influence across teams and disciplines and drive consensus on complex technical decisions.
Demonstrated ability to mentor and grow other engineers, give effective feedback, and create space for others to contribute.
Comfortable with ambiguity and deeply inquisitive: you ask “why” and “what if” and convert those questions into concrete experiments and system changes.
Preferred skills (nice to have):
You do not need to have all of these, but experience in several is a strong plus:
Google Cloud Platform (GCP): GCS, Dataproc, GKE, Pub/Sub, BigQuery, IAM.
Workflow orchestration: Temporal or Cadence; Airflow or similar systems for long‑running, failure‑prone workflows.
Big data & warehouses:
Apache Spark (e.g., running on Dataproc) for large‑scale batch processing,
Experience with data warehouses such as SingleStore, BigQuery, Snowflake, or similar.
Streaming systems: Kafka/Redpanda, Pub/Sub, or equivalent event/streaming platforms, especially for high‑volume or incremental data processing.
Experience designing multi‑tenant systems that enforce rate limits, fairness, and SLAs across many customers and destinations.
Strong background in performance and cost optimization for large‑scale data workloads (e.g., 10–100x speedups, significant compute cost reductions).
Prior experience working on advertising, marketing, or data activation platforms or other systems where data correctness, timeliness, and scale are all critical.
The approximate annual base compensation range is $156,500 to $235,000. The actual offer, reflecting the total compensation package and benefits, will be determined by a number of factors, including the applicant's experience, knowledge, skills, abilities, and geography, as well as internal equity among our team.


Benefits:
People: Work with talented, collaborative, and friendly people who love what they do.
Fun: We host in-person and virtual events such as game nights, happy hours, camping trips, and sports leagues.
Work/Life Harmony: Flexible paid time off, paid holidays, options for working from home, and paid parental leave.
Comprehensive Benefits Package: LiveRamp offers a comprehensive benefits package designed to help you be your best self in your personal and professional lives. Our benefits package offers medical, dental, vision, life and disability, an employee assistance program, voluntary benefits as well as perks programs for your healthy lifestyle, career growth and more.
Savings: Our 401K matching plan—1:1 match up to 6% of salary—helps you plan ahead. We also offer an Employee Stock Purchase Plan with a 15% discount off the purchase price of LiveRamp stock (U.S. LiveRampers).
RampRemote: A comprehensive office equipment and ergonomics program—we provide you with equipment and tools to be your most productive self, no matter where you're located.
More about us:
LiveRamp’s mission is to connect data in ways that matter, and doing so starts with our people. We know that inspired teams enlist people from a blend of backgrounds and experiences. And we know that individuals do their best when they not only bring their full selves to work but feel like they truly belong. Connecting LiveRampers to new ideas and one another is one of our guiding principles—one that informs how we hire, train, and grow our global team across nine countries and four continents.
LiveRamp is an affirmative action and equal opportunity employer (AA/EOE/W/M/Vet/Disabled) and does not discriminate in recruiting, hiring, training, promotion or other employment of associates or the awarding of subcontracts because of a person's race, color, sex, age, religion, national origin, protected veteran, disability, sexual orientation, gender identity, genetics or other protected status. Qualified applicants with arrest and conviction records will be considered for the position in accordance with the San Francisco Fair Chance Ordinance.
We use automated decision systems (ADS) as part of our recruitment and hiring process. If you require an accommodation or believe that the use of an ADS may create a barrier to your application or participation in the hiring process due to a disability or other protected characteristic, please let us know. We are committed to providing reasonable accommodations and ensuring an equitable hiring experience for all candidates.
California residents: Please see our California Personnel Privacy Policy for more information regarding how we collect, use, and disclose the personal information you provide during the job application process.
To all recruitment agencies: LiveRamp does not accept agency resumes. Please do not forward resumes to our jobs alias, LiveRamp employees or any other company location. LiveRamp is not responsible for any fees related to unsolicited resumes.