
Senior Machine Learning Engineer
📍 Location: Remote
💼 Type: Full-Time
💰 Salary: $800 - $1,000 USD / month
Apply at: https://forms.gle/Fwti67UeTEkx2Kkn6
About Us
At Momenta, we're committed to creating a safer digital world by protecting individuals and businesses from voice-based fraud and scams. Through innovative AI technology and community collaboration, we're building a future where communication is secure and trustworthy.
Position Overview
We’re hiring a Senior Machine Learning Engineer with deep expertise in audio signal processing and neural network-based detection. The selected engineer will be responsible for delivering a production-grade, real-time deepfake detection pipeline as part of a time-sensitive, high-stakes 3-month pilot deployment.
Key Responsibilities
📌 Design and Deliver Core Detection Pipeline
Lead the development of a robust, modular deepfake detection pipeline capable of ingesting, processing, and classifying real-time audio streams with high accuracy and low latency. Architect the system to operate under telecom-grade conditions with configurable interfaces and scalable deployment strategies.
📌 Model Strategy, Development, and Optimization
Own the experimentation and refinement of state-of-the-art deep learning models for voice fraud detection. Evaluate multiple model families, benchmark performance across datasets, and strategically select or ensemble models that balance precision, robustness, and compute efficiency for real-world deployment.
📌 Latency-Conscious Production Readiness
Ensure the entire detection stack meets strict performance targets, including sub-20ms inference latency. Apply industry best practices in model compression, preprocessing optimization, and system-level integration to support high-throughput inference on both CPU and GPU environments.
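A hard target like sub-20ms is usually enforced with a percentile benchmark rather than an average; a minimal sketch in Python (the `infer` callable is a stand-in for the real model, not part of this posting):

```python
import time


def p95_latency_ms(infer, sample, warmup: int = 10, runs: int = 200) -> float:
    """Measure the 95th-percentile wall-clock latency of `infer`, in ms."""
    for _ in range(warmup):                 # warm caches/JIT before timing
        infer(sample)
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        infer(sample)
        times.append((time.perf_counter() - t0) * 1e3)
    times.sort()
    return times[int(0.95 * len(times)) - 1]
```

In practice the harness would gate CI on `p95_latency_ms(...) < 20.0` so a regression in preprocessing or model size fails the build before deployment.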
📌 Evaluation Framework and Continuous Testing
Design and implement a comprehensive evaluation suite to validate model accuracy, false positive rates, and environmental robustness. Conduct rigorous testing across domains, including cross-corpus validation, telephony channel effects, adversarial scenarios, and environmental noise conditions.
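Anti-spoofing evaluations of this kind typically report the equal error rate (EER) alongside raw false positive rates; a minimal sketch, assuming detector scores where higher means "more likely spoofed":

```python
import numpy as np


def equal_error_rate(scores: np.ndarray, labels: np.ndarray) -> float:
    """Return the EER: the operating point where the false-accept rate
    (bonafide audio flagged as spoof) equals the false-reject rate
    (spoofed audio passed as bonafide). labels: 1 = spoof, 0 = bonafide."""
    eer, best_gap = 1.0, float("inf")
    for t in np.sort(np.unique(scores)):
        far = float(np.mean(scores[labels == 0] >= t))  # bonafide wrongly flagged
        frr = float(np.mean(scores[labels == 1] < t))   # spoof missed
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

A cross-corpus suite would compute this per condition (clean, telephony codec, added noise) to expose robustness gaps a single aggregate number hides.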
📌 Deployment Engineering and API Integration
Deliver a fully containerized, production-ready inference service with REST/gRPC endpoints. Build CI/CD pipelines, integration tests, and monitoring hooks to ensure system integrity, traceability, and ease of deployment across environments.
Required Skills & Qualifications
🎯 Technical Skills:
ML Frameworks: PyTorch, TensorFlow, ONNX, OpenVINO, TorchScript
Audio Libraries: Librosa, Torchaudio, FFmpeg
Model Development: CNNs, Transformers, Wav2Vec/WavLM, AASIST, RawNet
Signal Processing: VAD, noise reduction, band-pass filtering, codec simulation
Optimization: Quantization, pruning, GPU acceleration
DevOps: Git, Docker, CI/CD, FastAPI or Flask, REST/gRPC
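As a flavor of the signal-processing items above, band-pass filtering to the telephony voice band (roughly 300-3400 Hz) can be sketched with a plain FFT mask; a toy illustration assuming only NumPy (a production pipeline would use a proper filter design, e.g. via torchaudio or scipy):

```python
import numpy as np


def telephony_bandpass(signal: np.ndarray, sr: int,
                       lo: float = 300.0, hi: float = 3400.0) -> np.ndarray:
    """Zero FFT bins outside [lo, hi] Hz — a crude band-pass filter."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))
```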
🎯 Preferred Experience:
Prior work on audio deepfake detection or telephony speech processing
Experience with real-time ML model deployment
Understanding of adversarial robustness and domain adaptation
Familiarity with call center environments or telecom-grade constraints
Compensation & Career Path:
Competitive pay based on experience and capability. ($800 - $1,000 USD / month)
Full-time with potential for conversion to a core team role.
Opportunity to lead future research and production deployments as part of our AI division.
Why Join Momenta?
Solve a global security crisis with cutting-edge AI.
Own a deliverable that will ship into production at scale.
Join a fast-growing team with seasoned founders and engineers.
Fully remote, high-autonomy environment focused on deep work.
🚀 Apply now and help shape the future of voice security.
Role
You will develop and maintain the key backend code and infrastructure of the company stack. You will implement AI solutions such as LLMs for tasks like voice-based interactive systems, chatbots, and AI web apps. You should be able to see projects through from start to finish, with strong organizational skills and attention to detail. This is a perfect role for someone who likes to build state-of-the-art AI products and work with cutting-edge AI technologies like GPT, LLaMA, etc.
Qualifications
- BS or MS in Computer Science or relevant field.
- 4+ years of experience in backend software development
- Ability to design high-throughput, scalable backend systems
- Eagerness to learn applied AI technologies like LLMs, prompt engineering, etc
- Proficiency in Python.
- Experience with cloud computing platforms (AWS, GCP) and technologies like Docker
- Knowledge of REST APIs and databases (MySQL, MongoDB, vector DBs)
Role Responsibilities:
● Analyze business requirements
● Develop and customize Odoo modules
● Integrate Odoo with 3rd-party systems
● Troubleshooting
● Share ideas on how to continuously improve the system and way of working
Requirements:
The desired candidate should have the following skills:
● Working knowledge of Python with the Odoo-framework (minimum 3 years experience)
● Should be familiar with the latest versions of Odoo.
● Have experience with Object Oriented programming.
● Have knowledge of PostgreSQL.
● Should have experience with Python unit testing.
● Have experience setting up interfaces between different systems using APIs.
● Should be familiar with Agile and Scrum methodology.
● Have experience with collaboration tools like Git, Buildout, Jira, Confluence, etc.
● Have experience with Linux (Ubuntu).
- 2+ years professional writing experience
- Excellent writing and editing skills
- Experience writing content for web/mobile products
- Experience writing emails, landing pages, or editorial content
- Proven ability to collaborate successfully with cross-functional partners
- Ability to work independently in a fast-paced environment.
- Experience writing stories or more long-form editorial content
- Experience with trauma-informed writing is a plus
- Experience writing for social impact products or audiences is a plus
- Solid understanding of email marketing and social channels is a plus
- Portfolio of writing samples
Experience: 2+ years
Job Location Flexible: [Work from anywhere in India / Gurgaon, based on need]
Selection process: HR Round followed by Group Discussion and Sales Manager Round.
Qualification: B.Com, BBA, MBA, or any graduate
Salary Offered: As per industry standard.
About UAI Autoworks Pvt Ltd.
UAI Autoworks is a tech-enabled car servicing, repair, and detailing platform providing 24/7 minor repair services and roadside assistance.
Essential for this position :
- 3-4 years of experience as a manual tester
- Good understanding and experience with Microsoft Azure (or AWS)
- Good understanding and experience with APIs and microservices
- Experience with CI/CD pipelines (ideally Azure DevOps)
Any of the following exp will be preferred :
- Writing automated tests
- TDD and BDD
- Contributing to open source community
- DevOps, monitoring and alerting
- Experience with Health-tech, FHIR, Wearables, IoT, Big data
● Bookkeeping in Tally – revenue invoicing, collections, collection follow-ups with clients, client ledger reconciliation
● Payment Collection - Follow-ups
● GST preparation and filings
● Analyzing receivables, steps towards better DSO
● Monitoring revenue across business units
● Ensure there is no revenue leakage
● Bank entries, Collection, Invoicing, Duelist preparation, GST list
● Handling large customer base & high-velocity orders like a B2C online business
● Contracts handling, handling the client onboarding processes
● Dealing with the Internal and Statutory Auditors for revenue matters
● Dealing with the bankers for funds inwards from overseas clients
● Exposure around fund management, hedging, foreign currencies
● Experience - 4 to 8 years
● Ability to handle a team of 3-4 people
● Education - B.Com, M.Com, MBA, CA Inter - with relevant experience
Time zone - India time
Who will you work with?
You will work with a top-notch Finance team.
What can you look for?
A wholesome opportunity in a fast-paced environment that will enable you to juggle concepts while maintaining the quality of content, interact and share your ideas, and learn a great deal at work. Work with a team of highly talented young professionals and enjoy the benefits.
We are
A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, San Francisco, and Dublin. We have three products in our portfolio: Plum, Empuls, and Compass. We help our clients engage and motivate their employees, sales teams, channel partners, or consumers for better business results.
Way forward
We look forward to connecting with you. As you may take time to review this opportunity, we will wait around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. However, we assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. Candidates will be kept informed and updated on their feedback and application status.
Hiring for GCP-compliant (Good Clinical Practice) cloud data lake solutions for clinical trials for a US-based pharmaceutical company.
Summary
This is a key position within the Data Sciences and Systems organization, responsible for data systems and related technologies. The role will be part of the Amazon Web Services (AWS) Data Lake strategy, roadmap, and AWS architecture for data systems and technologies.
Essential/Primary Duties, Functions and Responsibilities
The essential duties and responsibilities of this position are as follows:
- Collaborate with data science and systems leaders and other stakeholders to roadmap, structure, prioritize and execute on AWS data engineering requirements.
- Work closely with the IT organization and other functions to ensure that business needs and requirements, IT processes, and regulatory compliance requirements are met.
- Build the AWS infrastructure required for optimal extraction, transformation, and loading of data from vendor-site clinical data sources using AWS big data technologies
- Create and maintain optimal AWS data pipeline architecture
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product
- Work with data and analytics experts to strive for greater functionality in our data systems
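The extraction/transformation duties listed above can be prototyped locally before being wired into AWS services; a minimal, illustrative sketch (the column names are invented for the example, not taken from the posting):

```python
import csv
import io


def transform_clinical_rows(csv_text: str) -> list:
    """Extract rows from a CSV export and normalize units (lb -> kg),
    producing JSON-ready records for downstream loading."""
    records = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        records.append({
            "subject_id": row["subject_id"],
            "weight_kg": round(float(row["weight_lb"]) * 0.453592, 2),
        })
    return records
```

In an AWS deployment this kind of transform would typically run inside a Lambda or Glue job triggered by an S3 object landing, with the same unit tests reused against the packaged function.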
Requirements
- A minimum of a bachelor's degree in Computer Science, Mathematics, Statistics, or a related discipline is required. A master's degree is preferred. A minimum of 6-8 years of technical management experience is required. Equivalent experience may be accepted.
- Experience with data lake and/or data warehouse implementation is required
- Minimum bachelor's degree in Computer Science, Computer Engineering, Mathematical Engineering, Information Systems, or related fields
- Project experience with visualization tools (AWS, Tableau, R Studio, PowerBI, R Shiny, D3.js) and databases. Experience with Python, R, or SAS coding is a big plus.
- Experience with AWS-based S3, Lambda, and Step Functions.
- Strong team player who can work effectively in a collaborative, fast-paced, multi-tasking environment
- Solid analytical and technical skills and the ability to exchange innovative ideas
- Quick learner and passionate about continuously developing your skills and knowledge
- Ability to solve problems by using AWS in data acquisitions
- Ability to work in an interdisciplinary environment. You are able to interpret and translate very abstract and technical approaches into a healthcare and business-relevant solution









