AI Engineer

TRUSS EMPLOYMENT OPPORTUNITY

AI Engineer for Hire

ABOUT THE EMPLOYER

Our client is a Revenue Performance Agency that empowers B2B companies to scale smarter by aligning strategy with execution, leveraging AI, data, and expertise. Unlike traditional consulting firms, they proactively implement strategies, partnering with revenue teams to achieve measurable outcomes. With a focus on Revenue Operations, Go-to-Market Strategy, Revenue Enablement, and AI-driven solutions, they eliminate silos and modernize sales processes. Trusted by startups and enterprises alike, the company uses Generative AI and AI Automation to optimize productivity, streamline revenue operations, and drive sustainable growth.

WHAT WILL YOU WORK ON?

The purpose of this role is to handle all high-complexity technical work – developing bespoke software components, integrating AI solutions deeply into client environments, and optimizing deployments (e.g., on Azure, GCP Vertex AI, etc.). In essence, the AI Engineer ensures that the AI solutions are not just smart, but also well-integrated, robust, and production-ready in enterprise contexts.

Key Responsibilities

  • Custom Integration Development: Write custom code (e.g., in Python) to connect AI assistants with systems or data sources when a custom approach is required. This could mean developing a microservice or script to interface with a proprietary database, creating a new API endpoint to expose data for an AI agent, or building a plug-in/connector for a platform that lacks one. The Engineer uses languages like Python (or Node.js/Java, depending on the stack) to build these integrations, ensuring they are secure and efficient.

  • Database & Data Pipeline Integration: Develop and manage data flows that support AI solutions. This might involve writing data extraction and transformation jobs to feed data into an AI model (like syncing CRM data to a vector store such as Pinecone or an Azure Cognitive Search index). The Engineer ensures data needed for AI is available and updated as required. They also handle writing results back to databases or data warehouses if the AI solution needs to log outputs or enable analytics.

  • Performance Optimization: Analyze and improve the performance of AI workflows. This includes optimizing response times of AI agents (perhaps by implementing caching layers, batching calls, or fine-tuning model parameters), ensuring that integrations handle high volumes (multi-threading or async processing for concurrency), and monitoring system metrics (CPU, memory, API throughput) to preemptively scale resources. The Engineer sets up logging and monitoring for the technical components, so issues like slowdowns or errors can be detected and addressed.

  • Security & Compliance Implementation: Ensure that all custom components and deployments adhere to security best practices and any client-specific compliance requirements. For instance, handling API keys and credentials securely (using vaults or key management services), ensuring data in transit and at rest is encrypted, implementing user authentication/authorization for any custom APIs, and complying with data privacy rules (not logging sensitive data from AI interactions, etc.). The Engineer often interfaces with the client’s IT/security teams to get approvals (e.g., having code security reviewed, or meeting penetration testing requirements).

  • Technical Troubleshooting & Support: Act as the highest escalation point for technical issues. If an AI integration is failing or an assistant is behaving unexpectedly due to technical reasons, the Engineer dives into logs, traces through code, and pinpoints the root cause. They fix bugs in custom code, handle incidents (like a service outage affecting an AI component), and ensure restoration of service. In the course of support, they may also implement improvements to prevent future issues (building more robust error handling, or adding redundancy).

  • Technical Strategy & Tooling: Advise on and implement tools/frameworks to improve the practice’s tech stack. For example, evaluating whether to use a framework like LangChain for managing prompts and context, deciding on the best vector database solution, or setting up CI/CD pipelines for code deployments. The Engineer contributes to the technical architecture decisions of the practice and might build internal utilities (scripts, libraries) that speed up development for the team, such as a template for calling the OpenAI API with retry logic, which Builders can then use (see the second sketch after this list).
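
To illustrate the kind of data-pipeline work described above, here is a minimal sketch of syncing CRM records into a Pinecone index. It assumes the current openai and pinecone Python SDKs; the index name, the fetch_crm_records() helper, and the record fields are hypothetical placeholders, not part of any actual client stack.

```python
# Hypothetical sketch: embed CRM record summaries and upsert them into a Pinecone index.
# Assumes `pip install openai pinecone` and OPENAI_API_KEY / PINECONE_API_KEY in the
# environment; fetch_crm_records() stands in for a real CRM extraction job.
import os
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("crm-accounts")  # hypothetical index name

def fetch_crm_records():
    """Placeholder for a real CRM extraction job (e.g., a paginated API pull)."""
    return [
        {"id": "acct-001", "summary": "Mid-market SaaS account, renewal due Q3."},
        {"id": "acct-002", "summary": "Enterprise prospect evaluating AI rollout."},
    ]

def sync_records_to_vector_store():
    records = fetch_crm_records()
    texts = [r["summary"] for r in records]
    # Embed all record summaries in a single batched call.
    response = openai_client.embeddings.create(
        model="text-embedding-3-small", input=texts
    )
    vectors = [
        {"id": rec["id"], "values": emb.embedding, "metadata": {"summary": rec["summary"]}}
        for rec, emb in zip(records, response.data)
    ]
    index.upsert(vectors=vectors)  # write the embeddings to the vector store

if __name__ == "__main__":
    sync_records_to_vector_store()
```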
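
And as one example of the internal utilities mentioned in the last item, below is a minimal sketch of a retry-wrapped OpenAI call of the kind such a shared template might provide. It assumes the openai Python SDK (v1+); the model name and backoff parameters are illustrative defaults, not prescribed values.

```python
# Hypothetical sketch: a shared helper that calls the OpenAI Chat Completions API with
# exponential backoff on transient errors. Assumes `pip install openai` (v1+) and
# OPENAI_API_KEY in the environment; model and retry settings are illustrative.
import time
from openai import OpenAI, APIConnectionError, APITimeoutError, RateLimitError

client = OpenAI()

def chat_with_retry(messages, model="gpt-4o-mini", max_retries=4, base_delay=1.0):
    """Call the chat completions endpoint, retrying transient failures with backoff."""
    for attempt in range(max_retries + 1):
        try:
            response = client.chat.completions.create(model=model, messages=messages)
            return response.choices[0].message.content
        except (RateLimitError, APIConnectionError, APITimeoutError):
            if attempt == max_retries:
                raise  # out of retries: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))  # backoff: 1s, 2s, 4s, ...

# Example usage:
# answer = chat_with_retry([{"role": "user", "content": "Summarize this account's renewal risk."}])
```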

WHAT SKILLS AND EXPERIENCE ARE WE LOOKING FOR?

  • 5–10+ years of experience in software development or engineering roles, with a proven track record of building integration-heavy and complex backend systems.

  • Expert knowledge of APIs and System Integration principles, specifically in RESTful API design, webhooks, and secure authentication methods (OAuth, API keys).

  • Experience with AI orchestration tools (e.g., n8n, Copilot Studio, Make, Zapier).

  • Experience and proficiency in prompt engineering across various LLMs.

  • Practical experience in the AI/ML domain, including implementing AI APIs, deploying models, and working with relevant frameworks and tools like LangChain and vector databases (e.g., Pinecone).

  • Strong System Architecture and Security mindset, with the ability to design scalable, reliable architectures and a commitment to secure coding practices and compliance implementation.

Tools and Platforms Used

  • Programming Languages: Primarily Python for scripting, data manipulation, and machine learning integration (given its rich AI libraries), including standalone scripts, Flask/FastAPI services, etc. Possibly JavaScript/TypeScript for Node-based integrations, code running in cloud functions, or where the client stack requires it. Familiarity with SQL for database queries, and potentially languages like C# or Java when working with certain enterprise systems (depending on the client's tech environment).

  • Cloud Platforms: Experience with Azure, Google Cloud, and AWS. Specifically:

    • Azure: Using Azure OpenAI Service, Azure Functions, Azure Logic Apps (low-code, though the Engineer might script within them), Azure Databricks or Azure Cognitive Search, Azure Kubernetes Service for deploying services, etc.

    • Google Cloud (Vertex AI): Deploying models or using Vertex endpoints, Cloud Functions, AI Platform Pipelines, BigQuery for data.

    • AWS: Possibly using AWS SageMaker, Lambda, API Gateway, etc., if clients are on AWS.

  • AI/ML Frameworks: Tools like LangChain, MLflow, TensorFlow/PyTorch (if custom models or advanced AI work is in scope), or libraries to handle embeddings (FAISS, etc.). Even if not training models from scratch, the Engineer might use these to manage context or chain LLM calls in a more controlled way than the low-code environment allows.

  • Databases & Data Tools: Knowledge of SQL databases (PostgreSQL, SQL Server, etc.) for any data storage or retrieval needs. Also NoSQL and vector databases like Pinecone, Weaviate, Chroma, or ElasticSearch if dealing with embeddings or unstructured data search. Possibly using Redis for caching AI responses or session data. Familiar with data pipeline tools or writing ETL scripts when needed.

  • DevOps & Version Control: Uses Git/GitHub for version control of any custom code written. Sets up CI/CD pipelines (GitHub Actions, Jenkins, etc.) to deploy code to cloud environments reliably. Familiar with Docker for containerizing applications and Kubernetes if orchestrating complex services. Also comfortable with command-line and infrastructure-as-code (Terraform, CloudFormation) if setting up cloud resources systematically.

  • Testing & Monitoring: Utilizes testing frameworks (PyTest for Python code, etc.) to validate the functionality of custom modules. Sets up logging and monitoring using tools like CloudWatch, Azure Monitor, Stackdriver, or custom dashboards. Might use APM (Application Performance Monitoring) tools to trace how the AI solution performs in real-time. Ensuring that if something fails at 2am, alerts go out (could integrate with pager/ops tools if needed).

  • Security Tools: Works with security scanning tools (for code vulnerabilities, dependency checks) and uses secure coding practices. Possibly uses API management tools (Azure API Management, Apigee, etc.) to securely expose services. Knowledge of OAuth, SAML, or other auth standards to integrate with enterprise security for any custom-built components.

Requires 3-4 hours of overlap with US time zones.

Featured benefits:

– Career Growth
– Work with Modern Tech
– Remote-Friendly Flexibility
– International Experience
– Pathway to U.S. Market Opportunities
– Stable, Growing Industry