

Ampera Technologies
https://amperatech.ai
Job Description:
1. Machine Learning Development & Deployment
· Design and implement supervised and unsupervised models for predictive analytics, including churn prediction, demand forecasting, renewal risk scoring, and cross-sell/upsell opportunity identification.
· Translate business problems into ML frameworks and production solutions that improve efficiency, revenue, or customer experience.
· Build, optimize, and maintain ML pipelines using tools such as MLflow, Airflow, or Kubeflow.
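To make the churn-prediction bullet concrete, here is a minimal risk-scoring sketch in plain Python. The features, data, and threshold are invented purely for illustration; in practice this would be a scikit-learn or XGBoost model tracked in an MLflow pipeline, not a hand-rolled one.

```python
import math

def sigmoid(z):
    z = max(-35.0, min(35.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Stochastic gradient descent on log loss; returns (weights, bias)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Toy data: [months_since_last_order, support_tickets], label 1 = churned.
X = [[1, 0], [2, 1], [10, 5], [12, 4], [0, 0], [11, 6]]
y = [0, 0, 1, 1, 0, 1]
w, b = train_logistic(X, y)

# Score a new customer: 9 quiet months, 5 tickets.
risk = sigmoid(sum(wj * xj for wj, xj in zip(w, [9, 5])) + b)
```

The same shape (train, then score) carries over directly once the toy loop is swapped for a real framework.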
2. Cross-Functional ML Use Cases
· Partner with teams across Sales (e.g., lead scoring, next-best action), Customer Service (e.g., case deflection, sentiment analysis), Finance (e.g., revenue forecasting, fraud detection), Supply Chain (e.g., inventory optimization, ETA prediction), and Order Fulfillment (e.g., delivery risk modeling) to define impactful ML use cases.
· Develop domain-specific models and continuously improve them using feedback loops and real-world performance data.
3. Model Governance and MLOps
· Ensure robust model monitoring, versioning, and retraining strategies to keep models reliable in dynamic environments.
· Work closely with DevOps and Data Engineering teams to automate deployment, CI/CD workflows, and cloud-native ML infrastructure (AWS/GCP/Azure).
4. Data Engineering and Feature Architecture
· Collaborate with data engineers to define feature stores, data quality checks, and model-ready datasets on platforms like Snowflake or Databricks.
· Perform feature selection, transformation, and engineering aligned with each domain’s business logic.
5. Communication & Stakeholder Collaboration
· Present technical insights and model results to business and executive stakeholders in a clear, actionable format.
· Work with Product Owners and Program Managers to scope, prioritize, and plan delivery of ML projects.
Qualifications:
Required
• Bachelor’s or Master’s degree in a quantitative field (e.g., Computer Science, Engineering, Statistics, or Mathematics)
• 4+ years of experience in machine learning or data science.
• Proficiency in Python and ML frameworks such as XGBoost, PyTorch, or TensorFlow.
• Experience deploying models into production using ML pipelines and orchestration frameworks.
• Strong understanding of data structures, SQL, and cloud platforms (e.g., AWS SageMaker, Azure ML, or GCP Vertex AI).
• Hands-on experience implementing machine learning algorithms such as Random Forest, XGBoost, and Logistic Regression, and deep learning techniques including neural networks (ANN, CNN).
Preferred:
• Experience supporting business functions such as Finance, Sales, or Operations with ML use cases.
• Familiarity with MLOps tools (MLflow, SageMaker Pipelines, Feature Store).
• Exposure to enterprise data platforms (e.g., Snowflake, Oracle Fusion, Salesforce).
• Background in statistics, forecasting, optimization, or recommendation systems.
Hi,
We are looking for an Oracle Incentive Compensation & Order Management Techno-Functional Consultant.
Please find the job description below:
Job Title: Oracle Incentive Compensation & Order Management Techno-Functional Consultant
Experience: 5+ Years
Location: Remote
Key Responsibilities
Oracle EBS & Incentive Compensation
- Design, configure, and implement Oracle Incentive Compensation (OIC) solutions within Oracle EBS R12.
- Analyze and integrate sales compensation plans, quota management, and commission calculations.
- Configure Plan Elements, Rate Tables, Compensation Plans, Pay Groups, and Sales Rep structures.
- Develop and enhance commission and incentive reporting frameworks.
Technical Development
- Develop PL/SQL packages, procedures, and functions for enterprise applications.
- Build RICEW components (Reports, Interfaces, Conversions, Extensions, Workflows).
- Create XML Publisher (BI Publisher) reports using RTF and XSLT templates.
- Develop and enhance custom workflows and concurrent programs.
Integration & Cloud Technologies
- Design and implement integration solutions using Oracle Integration Cloud (OIC).
- Develop REST web services using Oracle EBS Integrated SOA Gateway (ISG).
- Build integrations between Oracle EBS and external applications (Salesforce, ERP Cloud, third-party systems).
- Implement inbound and outbound interfaces for enterprise data exchange.
Implementation & Support
- Participate in full lifecycle implementations, upgrades, and system stabilization projects.
- Conduct requirements gathering, PRD preparation, and functional/technical design documentation.
- Perform unit testing, integration testing, and UAT support.
- Provide production support and issue resolution for business-critical applications.
Data Migration & Bulk Data Handling
- Use SQL*Loader, Export/Import utilities, and data loaders (FBDI, HCM DL) for large-scale data migration.
- Manage data conversion from legacy systems to Oracle EBS/Fusion applications.
Technical Skills
ERP & Cloud Platforms
- Oracle E-Business Suite R12
- Oracle Incentive Compensation (OIC / ICM)
- Oracle Fusion ERP Cloud (Finance & SCM)
- Oracle Integration Cloud (OIC)
Development Technologies
- PL/SQL
- SQL
- XML / XSLT
- REST Web Services
Tools & Utilities
- TOAD
- PL/SQL Developer
- SQL*Loader
- Oracle Forms 6i
- Oracle Reports 6i
- BI/XML Publisher
Databases
- Oracle 9i / 10g
Functional Knowledge
- CRM Foundation
- Core HR
- P2P, O2C cycles
- Order Management, Inventory, Purchasing
Job Description:
We are seeking a Cloud & AI Platform Engineer to design and operate AI-native infrastructure that supports large-scale machine learning, generative AI, and agentic AI systems.
This role will focus on building secure, scalable, and automated multi-cloud platforms across AWS, Azure, GCP, and hybrid on-prem environments, enabling teams to deploy LLMs, AI agents, and data-driven applications reliably in production.
You will work at the intersection of cloud engineering, MLOps, LLMOps, DevOps, and data infrastructure, helping build platforms that support RAG pipelines, vector search, AI model lifecycle management, and AI observability.
Key Responsibilities
AI & Agentic Infrastructure
- Design infrastructure to support agentic AI systems, autonomous agents, and multi-agent workflows.
- Build scalable runtime environments for LLM orchestration frameworks.
- Enable deployment of AI copilots, assistants, and autonomous decision systems.
Common frameworks may include:
- LangChain
- LlamaIndex
- AutoGPT
LLMOps & AI Model Lifecycle
Design and manage LLMOps pipelines for the full lifecycle of large language models:
- Model deployment
- Prompt management
- Versioning
- Evaluation and testing
- Model monitoring
Integrate with AI platforms such as:
- Azure Machine Learning
- Amazon SageMaker
- Vertex AI
Retrieval-Augmented Generation (RAG) Infrastructure
Design and optimize RAG pipelines that integrate enterprise knowledge with LLMs.
Responsibilities include:
- Document ingestion pipelines
- Embedding generation workflows
- Knowledge indexing
- Query orchestration
- Retrieval optimization
- Support scalable semantic search architectures.
Vector Database & Knowledge Infrastructure
Deploy and manage vector databases used for AI applications and semantic retrieval.
Common technologies include:
- Pinecone
- Weaviate
- Milvus
- FAISS
Responsibilities include:
- Index optimization
- Query latency tuning
- Scalable embedding storage
- Hybrid search architecture
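The retrieval pattern these vector stores implement can be pictured with a brute-force cosine-similarity search in plain Python. The documents and embeddings below are toy values invented for the example; engines like FAISS or Pinecone replace the linear scan with approximate nearest-neighbor indexes, which is where the index-optimization and latency-tuning work above comes in.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, index, k=2):
    """Linear scan over (doc_id, embedding) pairs; returns the best k doc ids."""
    scored = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Tiny toy index: ids and 3-dimensional "embeddings".
index = [
    ("refund-policy", [0.9, 0.1, 0.0]),
    ("shipping-eta",  [0.1, 0.9, 0.2]),
    ("warranty-faq",  [0.8, 0.2, 0.1]),
]
hits = top_k([1.0, 0.0, 0.0], index, k=2)
```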
Multi-Cloud AI Infrastructure
Design and maintain AI-ready infrastructure across:
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform
Key responsibilities include:
- GPU infrastructure management
- Distributed training environments
- Hybrid cloud integrations with on-prem data centers
- Infrastructure scaling for AI workloads
Data Platforms & Integration
- Support deployment and optimization of data lakes, data warehouses, and streaming platforms.
- Work with data engineering teams to ensure secure and scalable data infrastructure.
Cloud Architecture & Infrastructure
- Design and implement scalable multi-cloud infrastructure across Azure, AWS, and Google Cloud.
- Build hybrid cloud architectures integrating on-premise environments with cloud platforms.
- Implement high availability, disaster recovery, and auto-scaling architectures for AI workloads.
DevOps, Platform Engineering & Automation
Build automated cloud infrastructure using modern DevOps practices.
Tools may include:
- Terraform
- Docker
- Kubernetes
- GitHub Actions
Responsibilities include:
- Infrastructure as Code (IaC)
- Automated deployments
- CI/CD pipelines for AI models and services
- Platform reliability and scalability
AI Observability & Monitoring
Implement observability frameworks to monitor AI systems in production.
This includes:
- Model performance monitoring
- Prompt evaluation
- Hallucination detection
- Latency and throughput analysis
- Cost monitoring for LLM usage
Tools may include:
- Arize AI
- WhyLabs
- Weights & Biases
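The latency and cost items in the observability list can start as simple aggregation over per-request records, sketched below. The record shape and the per-token price are invented for the example (not a real rate); dedicated tools add dashboards, alerting, and quality metrics on top of this kind of rollup.

```python
def summarize(requests, price_per_1k_tokens=0.002):
    """Aggregate LLM request logs into a p95 latency and a total cost.

    requests: list of dicts with 'latency_ms' and 'tokens' keys.
    price_per_1k_tokens is an assumed example figure.
    """
    lat = sorted(r["latency_ms"] for r in requests)
    p95 = lat[min(len(lat) - 1, int(0.95 * len(lat)))]  # nearest-rank p95
    cost = sum(r["tokens"] for r in requests) * price_per_1k_tokens / 1000
    return p95, cost

reqs = [{"latency_ms": 120, "tokens": 500},
        {"latency_ms": 340, "tokens": 1500},
        {"latency_ms": 90,  "tokens": 800}]
p95, cost = summarize(reqs)
```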
Security, Governance & Responsible AI
Ensure AI systems follow strong governance and security practices.
Responsibilities include:
- Data privacy and compliance
- Model governance frameworks
- Secure model deployment
- Monitoring model bias and drift
- AI risk management
Support enterprise frameworks for Responsible AI and AI compliance.
Data & Security
- Experience with data lake architectures, distributed storage, and ETL pipelines
- Knowledge of data security, encryption, IAM, and compliance frameworks
- Familiarity with AI governance and responsible AI practices
Required Skills
Cloud & Infrastructure
- Strong experience in Azure (must-have); experience with AWS or GCP
- Hybrid and multi-cloud architecture
- GPU infrastructure management
DevOps & Automation
- Kubernetes
- Docker
- Terraform
- CI/CD pipelines
AI / ML Platforms
- MLOps pipelines
- Model deployment
- Model monitoring
AI Application Infrastructure
- Vector databases
- RAG pipelines
- LLM orchestration frameworks
Programming
Experience in one or more languages:
- Python
- Go
- Java
- TypeScript
Preferred Qualifications
- Experience building AI copilots or autonomous agents
- Knowledge of GPU infrastructure and distributed model training
- Familiarity with AI evaluation frameworks, model monitoring, drift detection, and AI observability
- Experience building enterprise AI platforms
Education & Experience
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
- 4–8+ years experience in cloud infrastructure, DevOps, or platform engineering
- Experience working in data-driven or AI-focused environments
What Success Looks Like
- Reliable deployment pipelines and infrastructure for ML models, LLMs, and AI agents
- Scalable RAG knowledge platforms
- Efficient multi-cloud infrastructure management
- Fast deployment cycles for AI products
- Secure and scalable AI-ready cloud platforms
- Strong automation and governance across cloud and AI systems
Job Description:
We are looking for a skilled Ethical Hacker (Penetration Tester) who will be responsible for identifying vulnerabilities in systems, networks, and applications before malicious hackers can exploit them. The role involves conducting security assessments, penetration testing, and recommending security improvements to strengthen the organization’s cybersecurity posture.
Key Responsibilities
· Conduct penetration testing on web applications, mobile applications, APIs, and networks.
· Identify security vulnerabilities and weaknesses in systems and infrastructure.
· Perform vulnerability assessments using automated tools and manual techniques.
· Simulate cyberattacks to evaluate the effectiveness of existing security measures.
· Prepare detailed security reports highlighting risks, vulnerabilities, and remediation strategies.
· Collaborate with development, DevOps, and IT teams to fix security gaps.
· Ensure compliance with security standards and frameworks such as OWASP, ISO 27001, and NIST.
· Conduct security audits and risk assessments across digital platforms.
· Stay updated on the latest hacking techniques, security vulnerabilities, and cyber threats.
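At the network layer, the simplest building block of the assessments above is a TCP connect probe, sketched here with the standard library (this is the idea behind Nmap's connect-scan mode, not a replacement for it). Only ever run scans against systems you are explicitly authorized to test.

```python
import socket

def is_port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to (host, port) succeeds.

    connect_ex returns 0 on success and an errno otherwise, so a
    closed or filtered port yields a non-zero code instead of raising.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Probe a few common ports on the local machine (illustrative targets only).
open_ports = [p for p in (22, 80, 443) if is_port_open("127.0.0.1", p, timeout=0.2)]
```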
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Cybersecurity, Information Technology, or related field.
- 4+ years of experience in ethical hacking, penetration testing, or cybersecurity.
- Strong knowledge of network security, system security, and application security.
- Experience with security tools such as:
- Burp Suite
- Metasploit
- Nmap
- Wireshark
- Kali Linux
- Knowledge of OWASP Top 10 vulnerabilities.
- Understanding of Linux, Windows, and cloud security environments.
- Strong analytical and problem-solving skills.
Preferred Certifications
- CEH (Certified Ethical Hacker)
- OSCP (Offensive Security Certified Professional)
- CompTIA Security+
- CISSP (optional but valuable)
Key Competencies
- Cybersecurity risk assessment
- Vulnerability management
- Penetration testing methodologies
- Incident response awareness
- Strong documentation and reporting skills
Nice to Have
- Experience in cloud security (AWS, Azure, GCP)
Hi,
Title : Senior AI/ML Engineer
Experience : 5 – 10+ Yrs
Location : Bengaluru
Work Type : Hybrid – 2 days Work from office
Type of hire : PwD & Non-PwD Inclusive Hiring
Employment Type : Full Time
Notice Period : Immediate Joiner
Workdays : Mon - Fri
Role Overview
We are seeking an exceptional AI Engineer who can design and build production-grade AI systems that combine advanced machine learning, Generative AI, and scalable software engineering.
This role goes beyond traditional data science and focuses on building end-to-end AI platforms, autonomous AI agents, intelligent decision systems, and enterprise AI applications.
You will work on real-world enterprise problems across industries, developing AI systems that automate reasoning, prediction, and decision-making at scale.
What You Will Build
Examples of systems you may work on:
• AI Copilots for enterprise workflows
• Autonomous AI agents for automation
• Decision intelligence platforms
• Retrieval-Augmented Generation (RAG) systems
• Predictive ML systems for forecasting and anomaly detection
• AI-powered knowledge assistants
• Intelligent automation platforms
Key Responsibilities
1. Advanced Machine Learning & Predictive Systems
Design and implement ML models including:
• Time series forecasting
• Predictive modeling
• Anomaly detection
• Recommendation systems
• NLP / text intelligence
• Deep learning models
Develop models using:
• PyTorch
• TensorFlow
• Scikit-learn
• XGBoost / LightGBM
2. Generative AI & LLM Systems
Build enterprise-grade GenAI applications including:
• AI copilots
• conversational agents
• document intelligence systems
• enterprise knowledge assistants
Develop LLM systems using:
• OpenAI / Claude / Gemini / Llama
• prompt engineering techniques
• embeddings and semantic search
• RAG architectures
3. Agentic AI Systems
Design autonomous AI systems capable of reasoning and executing tasks.
Build multi-agent architectures using:
• LangGraph
• CrewAI
• AutoGen
• Semantic Kernel
Integrate agents with:
• APIs
• enterprise data systems
• internal workflows
4. AI Platform Engineering
Develop scalable AI services and applications using:
• Python
• FastAPI / Flask
• asynchronous processing
• distributed compute frameworks
Build production-grade APIs and AI services.
5. Enterprise AI Deployment & MLOps
Deploy AI models into scalable production environments.
Work with:
• Docker
• Kubernetes
• CI/CD pipelines
• MLflow / experiment tracking
• model monitoring and drift detection
Deploy AI solutions on:
• Azure
• AWS
• GCP
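The model monitoring and drift detection mentioned in this section can be illustrated with a crude mean-shift check: flag drift when the live feature mean wanders too far from the training baseline. Real platforms use richer statistics (PSI, KS tests), and the threshold here is an arbitrary example value.

```python
def mean_shift_drift(baseline, live, threshold=2.0):
    """Flag drift when the live mean is more than `threshold` baseline
    standard deviations away from the baseline mean (population std)."""
    n = len(baseline)
    mu = sum(baseline) / n
    var = sum((x - mu) ** 2 for x in baseline) / n
    sd = var ** 0.5
    live_mu = sum(live) / len(live)
    return abs(live_mu - mu) > threshold * sd

# Toy feature values: a stable baseline, then a shifted and an unshifted window.
baseline = [10, 11, 9, 10, 10, 11, 9, 10]
drifted = mean_shift_drift(baseline, [15, 16, 14, 15])
stable = mean_shift_drift(baseline, [10, 10, 11, 9])
```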
6. Data Integration & AI Systems
Work with enterprise data sources including:
• relational databases
• data warehouses (Snowflake, Redshift, BigQuery)
• data lakes (S3 / Azure Data Lake)
• vector databases (Pinecone, Weaviate, FAISS)
Required Skills:
Programming
Expert-level proficiency in:
• Python
• software engineering best practices
• data structures and algorithms
Experience building production-ready systems.
Machine Learning
Strong expertise in:
• supervised learning
• unsupervised learning
• deep learning
• time-series modelling
• model evaluation and optimization
Generative AI
Experience working with:
• LLM APIs
• prompt engineering
• RAG pipelines
• embeddings and vector search
AI Architecture
Ability to design:
• scalable AI systems
• distributed ML systems
• intelligent automation platforms
Preferred Experience
• Building enterprise AI products
• Developing AI copilots or agents
• Designing decision intelligence platforms
• Experience with large-scale data systems
Ideal Candidate Profile
The ideal candidate is:
• A strong ML engineer AND software engineer
• Comfortable building AI systems end-to-end
• Experienced in deploying models to production
• Passionate about next-generation AI architectures
We value builders who ship real systems, not just research prototypes.
Education
Bachelor’s / Master’s in:
Computer Science
Artificial Intelligence
Machine Learning
Data Science
or related field.
Why Join Ampera
At Ampera, we are building AI-native enterprise platforms that transform how organizations use data and intelligence.
Engineers at Ampera work on:
• real-world enterprise AI systems
• cutting-edge GenAI and agentic architectures
• global enterprise clients across industries
• high-impact AI platforms that scale.
What Makes This Role Unique
You will help build the next generation of enterprise AI systems — where AI moves beyond prediction and becomes an autonomous decision-making layer for organizations.
About Ampera:
Ampera Technologies is a purpose-driven digital IT services company focused on supporting our clients with their Data, AI/ML, Accessibility, and other digital IT needs. We also ensure that equal opportunities are provided to persons-with-disabilities talent. Ampera Technologies has its global headquarters in Chicago, USA, and its Global Delivery Center in Chennai, India. We are actively expanding our tech delivery team in Chennai and across India. We offer exciting benefits for our teams, such as: 1) hybrid and remote work options, 2) the opportunity to work directly with our global enterprise clients, 3) the opportunity to learn and implement evolving technologies, 4) comprehensive healthcare, and 5) a conducive environment for persons-with-disabilities talent, meeting physical and digital accessibility standards.
Accessibility & Inclusion Statement
We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.
Equal Opportunity Employer (EOE) Statement
Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

About Ampera
Ampera builds enterprise-grade AI platforms that sit at the intersection of large-scale data systems, intelligent orchestration, and applied AI.
Our products help Fortune 1000 companies optimize operations, manage risk, and unlock decision intelligence using AI, LLMs, and agentic workflows.
Role Overview
We are looking for a strong Full Stack Engineer who has built production-grade enterprise applications, worked with large datasets, and is excited about integrating AI systems and LLM-powered workflows into real-world platforms.
This is not a UI-only or API-only role.
You’ll own end-to-end system design, from frontend experiences to backend orchestration and AI integration.
What You’ll Work On
- Enterprise web platforms used by analysts, admins, and leadership
- High-scale data-intensive applications (query orchestration, risk intelligence, estimation engines)
- AI-augmented workflows (LLMs, agents, optimizers, explainability layers)
- Secure, governed, multi-tenant systems with role-based access
Key Responsibilities
Full Stack Development
- Design and build scalable web applications (frontend + backend)
- Develop API-first backend services for enterprise workflows
- Build admin dashboards, analyst workflows, and decision-ready UIs
- Ensure performance, reliability, and maintainability at scale
Backend & Data Systems
- Work with large-scale relational databases (Azure Synapse, Redshift, Snowflake, PostgreSQL, SQL Server, etc.)
- Design data models for high-volume, analytical workloads
- Implement caching, orchestration, background processing, and async workflows
- Integrate with enterprise systems (identity, BI tools, data platforms)
AI / LLM Integration
- Integrate LLMs (OpenAI, Azure OpenAI, etc.) into production systems
- Build AI-powered services:
- Query optimization
- Risk scoring & explainability
- Estimation & prediction workflows
- Implement agentic AI patterns (multi-step reasoning, tool-using agents, orchestration)
- Work closely with ML engineers to productionize AI models
Enterprise Readiness
- Implement RBAC, audit logging, governance, and guardrails
- Design systems with security, compliance, and observability in mind
- Contribute to CI/CD pipelines, deployments, and production monitoring
Required Skills & Experience
Core Engineering
- 4–8+ years of experience as a Full Stack Engineer
- Strong backend experience with Python (FastAPI preferred) or equivalent
- Solid frontend experience with React / modern JS frameworks
- Strong understanding of REST APIs, async processing, microservices
Data & Systems
- Hands-on experience with large databases and complex schemas
- Strong SQL skills and experience optimizing queries
- Experience building enterprise-grade, data-heavy applications
AI / Modern Stack
- Practical experience integrating LLMs or AI services into applications
- Familiarity with:
- Prompting & structured outputs
- AI evaluation & guardrails
- Explainability and risk-aware AI
- Exposure to agentic AI frameworks or multi-step AI workflows is a big plus
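The "prompting & structured outputs" plus "guardrails" combination often starts as a schema check on the model's JSON reply before it reaches downstream systems, as in this sketch. The risk_score/explanation schema is invented for the example; a production system would likely use Pydantic or JSON Schema instead of hand-rolled checks.

```python
import json

# Assumed example schema: field name -> expected Python type.
REQUIRED = {"risk_score": float, "explanation": str}

def parse_llm_output(raw):
    """Validate a model reply against the expected structure; a lightweight
    guardrail that rejects malformed or out-of-range outputs."""
    data = json.loads(raw)
    for field, typ in REQUIRED.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], typ):
            raise ValueError(f"bad type for {field}")
    if not 0.0 <= data["risk_score"] <= 1.0:
        raise ValueError("risk_score out of range")
    return data

ok = parse_llm_output('{"risk_score": 0.42, "explanation": "low churn signals"}')
```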
Nice-to-Have (Strong Differentiators)
- Experience building AI-powered SaaS or internal enterprise platforms
- Background in analytics, risk systems, finance, supply chain, or operations
- Experience with Redis, message queues, background workers
- Familiarity with Azure / AWS, containerization, Kubernetes
- Ability to translate business problems → system design
What We Look For (Beyond Skills)
- Strong systems thinking — you see the whole picture
- Comfortable operating in ambiguous, zero-to-one builds
- Ability to reason about scale, cost, and enterprise constraints
- Builder mindset — you ship, iterate, and own outcomes
Why Join Ampera
- Work on real enterprise AI platforms, not demos or chatbots
- Exposure to LLMs, agentic AI, and applied AI at scale
- High ownership, senior-heavy team, minimal bureaucracy
- Opportunity to shape core architecture and AI strategy
Hi,
Title : Accessibility Testing
Experience : 4 to 5 Yrs
Location : Chennai
Type of hire : Person with Disabilities (PWD) and Non PWD
Employment Type : Full Time
Notice Period : Immediate Joiner
Working hours : 09:00 a.m. to 06:00 p.m.
Work Days : Mon - Fri
Job Description:
We are looking for an experienced Accessibility Tester with 4–5 years of hands-on experience in web and mobile accessibility testing. The ideal candidate should have strong expertise in WCAG compliance, assistive technology testing, functional validation, and usability evaluation to ensure digital products deliver inclusive and accessible user experiences.
Key Responsibilities
- Perform comprehensive accessibility testing for web and mobile applications.
- Evaluate applications against WCAG 2.1 / 2.2 (AA level), ADA, and other relevant accessibility standards.
- Conduct manual and automated accessibility testing using industry-standard tools.
- Perform assistive technology testing using screen readers (NVDA, JAWS, VoiceOver, TalkBack).
- Validate keyboard-only navigation, focus order, and focus visibility.
- Conduct functional validation to ensure business and accessibility requirements are met.
- Execute usability testing focusing on users with visual, auditory, motor, and cognitive disabilities.
- Identify, document, and track accessibility defects with clear reproduction steps, WCAG references, severity classification, and remediation guidance.
- Prepare detailed accessibility audit reports and support compliance documentation.
- Collaborate with developers, designers, product teams, and QA to resolve accessibility issues.
- Validate fixes and perform regression accessibility testing.
- Participate in accessibility reviews during design and development phases.
Technical Skills Required
Accessibility Expertise
- Strong knowledge of WCAG 2.1 / 2.2 guidelines and success criteria mapping.
- Good understanding of ARIA roles, semantic HTML, accessible forms, dynamic components, and focus management.
- Experience testing across different disability types (visual, auditory, motor, cognitive).
- Knowledge of Section 508 and other accessibility regulations (preferred).
Testing Skills
- Strong experience in functional testing and validation.
- Hands-on experience in usability testing methodologies.
- Experience preparing structured defect reports and accessibility audit documentation.
- Ability to classify issues based on severity (Critical / Major / Minor).
Tools & Technologies
· Experience with accessibility testing tools such as Axe, WAVE, Lighthouse.
· Hands-on experience with screen readers (NVDA, JAWS, VoiceOver, TalkBack).
· Experience testing mobile accessibility using platform-native accessibility tools on iOS and Android.
· Basic understanding of HTML, CSS, and JavaScript.
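That HTML knowledge comes into play in checks like the one below, a tiny standard-library sketch of the missing-alt-text rule (WCAG success criterion 1.1.1) that tools such as Axe and WAVE automate across a whole page.

```python
from html.parser import HTMLParser

class ImgAltChecker(HTMLParser):
    """Counts <img> tags that lack an alt attribute.

    A minimal illustration only: real audits also check for empty alt on
    meaningful images, decorative-image conventions, ARIA, and much more.
    """
    def __init__(self):
        super().__init__()
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) tuples.
        if tag == "img" and "alt" not in dict(attrs):
            self.violations += 1

checker = ImgAltChecker()
checker.feed('<p><img src="logo.png" alt="Company logo"><img src="chart.png"></p>')
```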
Preferred Qualifications
- IAAP Certification (CPACC / WAS / CPWA) – Good to Have.
- Experience working in Agile / Scrum environments.
- Experience in enterprise-level accessibility audits.
- Exposure to mobile accessibility testing on iOS and Android platforms.
Key Competencies
- Strong attention to detail.
- Analytical and problem-solving skills.
- Ability to interpret accessibility standards and apply them practically.
- Effective communication skills for client and stakeholder interactions.
- Passion for inclusive design and digital accessibility.
Accessibility & Inclusion Statement
We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.
Equal Opportunity Employer (EOE) Statement
Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Thanks,
Kavitha Udhay

Hi,
We are seeking a senior data leader with deep functional expertise in Salesforce Sales and Service domains to own the enterprise data model, metrics, and analytical outcomes supporting Sales, Service, and Customer Operations.
This role is business‑first and data‑centric. The successful candidate understands how Salesforce Sales Cloud and Service Cloud data is generated, evolves over time, and is consumed by business teams, and ensures analytics accurately reflect operational reality.
Snowflake serves as the enterprise analytics platform, but Salesforce domain mastery and functional data expertise are the primary requirements for success in this role.
Core Responsibilities
Salesforce Sales & Service Data Ownership
· Act as the data owner and architect for Salesforce Sales and Service domains.
- Own Sales data including leads, accounts, opportunities, pipeline, bookings, revenue, forecasting, and CPQ (if applicable).
- Own Service data including cases, case lifecycle, SLAs, backlog, escalations, and service performance metrics.
- Define and govern enterprise‑wide KPI and metric definitions across Sales and Service.
- Ensure alignment between Salesforce operational definitions and analytics/reporting outputs.
- Own cross‑functional metrics spanning Sales, Service, and the customer lifecycle (e.g., customer health, renewals, churn).
Business‑Driven Data Modeling
· Design Salesforce‑centric analytical data models that accurately reflect Sales and Service processes.
- Model sales stage progression, pipeline history, and forecast changes over time.
- Model service case lifecycle, SLA compliance, backlog aging, and resolution metrics.
- Handle Salesforce‑specific complexities such as slowly changing dimensions (ownership, territory, account hierarchies).
- Ensure data models support operational dashboards, executive reporting, and advanced analytics.
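One way to picture the slowly-changing-dimension handling mentioned above is a Type 2 history table: an ownership change closes the current row and opens a new one, so point-in-time reporting stays correct. The field names and dates below are invented for the example; in practice this logic lives in the ELT layer on Snowflake.

```python
def apply_owner_change(history, account_id, new_owner, change_date):
    """SCD Type 2 update: close the open row for the account, then
    append a new current row (valid_to=None marks the current row)."""
    for row in history:
        if row["account_id"] == account_id and row["valid_to"] is None:
            row["valid_to"] = change_date
    history.append({"account_id": account_id, "owner": new_owner,
                    "valid_from": change_date, "valid_to": None})
    return history

history = [{"account_id": "A1", "owner": "alice",
            "valid_from": "2024-01-01", "valid_to": None}]
apply_owner_change(history, "A1", "bob", "2024-06-01")

# Point-in-time queries filter on the validity window; "current" is valid_to=None.
current = [r for r in history if r["valid_to"] is None]
```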
Analytics Enablement & Business Partnership
· Partner closely with Sales Operations, Service Operations, Revenue Operations, Finance, and Analytics teams.
- Translate business questions into trusted, reusable analytical datasets.
- Identify data quality issues or Salesforce process gaps impacting reporting and drive remediation.
- Enable self‑service analytics through well‑documented, certified data products.
Technical Responsibilities (Enabling Focus)
· Architect and govern Salesforce data ingestion and modeling on Snowflake.
- Guide ELT/ETL strategies for Salesforce objects such as Opportunities, Accounts, Activities, Cases, and Entitlements.
- Ensure reconciliation and auditability between Salesforce, Finance, and analytics layers.
- Define data access, security, and governance aligned with Salesforce usage patterns.
- Partner with data engineering teams on scalability, performance, and cost efficiency.
Required Experience & Skills
Salesforce Sales & Service Domain Expertise (Must‑Have)
· Extensive hands‑on experience working with Salesforce Sales Cloud and Service Cloud data.
- Strong understanding of sales pipeline management, forecasting, and revenue reporting.
- Strong understanding of service case workflows, SLAs, backlog management, and service performance measurement.
- Experience working directly with Sales Operations and Service Operations teams.
- Ability to identify when Salesforce configuration or process issues cause reporting inconsistencies.
Data & Analytics Expertise
· 10+ years working with business‑critical analytical data.
- Proven experience defining KPIs, metrics, and semantic models for Sales and Service domains.
- Strong SQL and analytical skills to validate business logic and data outcomes.
- Experience supporting BI and analytics platforms such as Tableau, Power BI, or MicroStrategy.
Platform Experience
· Experience using Snowflake as an enterprise analytics platform.
- Understanding of modern ELT/ETL and cloud data architecture concepts.
- Familiarity with data governance, lineage, and access control best practices.
Leadership & Collaboration
· Acts as a bridge between business stakeholders and technical teams.
- Comfortable challenging requirements using business and data context.
- Mentors engineers and analysts on Salesforce data nuances and business meaning.
- Strong communicator able to explain complex Salesforce data behavior to non‑technical leaders.
Thanks,
Ampera Talent Team
Hi,
Greetings from Ampera!
We are looking for a Data Scientist with strong Python and forecasting experience.
Title : Data Scientist – Python & Forecasting
Experience : 4 to 7 Yrs
Location : Chennai/Bengaluru
Type of hire : PWD and Non PWD
Employment Type : Full Time
Notice Period : Immediate Joiner
Working hours : 09:00 a.m. to 06:00 p.m.
Workdays : Mon - Fri
Job Description:
We are looking for an experienced Data Scientist with strong expertise in Python programming and forecasting techniques. The ideal candidate should have hands-on experience building predictive and time-series forecasting models, working with large datasets, and deploying scalable solutions in production environments.
Key Responsibilities
- Develop and implement forecasting models (time-series and machine learning based).
- Perform exploratory data analysis (EDA), feature engineering, and model validation.
- Build, train, validate, and optimize predictive and machine learning models for real-world business use cases such as demand forecasting, revenue prediction, and trend analysis.
- Apply appropriate ML algorithms based on business problems and data characteristics.
- Write clean, modular, and production-ready Python code.
- Work extensively with Python packages and libraries for data processing and modelling.
- Collaborate with Data Engineers and stakeholders to deploy models into production.
- Monitor model performance and improve accuracy through continuous tuning.
- Document methodologies, assumptions, and results clearly for business teams.
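The monitoring and validation items above come down to tracking a few error metrics over time. A minimal sketch in plain Python (the function names are our own, not from any specific library):

```python
import math

def rmse(actual, forecast):
    """Root mean squared error: penalizes large misses more heavily."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mape(actual, forecast):
    """Mean absolute percentage error; assumes no zero actuals."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)
```

For example, `rmse([100, 200], [110, 190])` is 10.0 and `mape([100, 200], [110, 190])` is 7.5; a sustained rise in either signals that a deployed model needs retuning.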
Technical Skills Required:
Programming
- Strong proficiency in Python
- Experience with Pandas, NumPy, Scikit-learn
Forecasting & Modelling
- Hands-on experience in Time Series Forecasting (ARIMA, SARIMA, Prophet, etc.)
- Experience with ML-based forecasting models (XGBoost, LightGBM, Random Forest, etc.)
- Understanding of seasonality, trend decomposition, and statistical modeling
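A useful sanity check before reaching for ARIMA or Prophet is a pair of classic baselines that any fitted model should beat. A minimal sketch in plain Python (the function names are illustrative, not from any forecasting package):

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing: the final smoothed level is the
    one-step-ahead forecast (no trend or seasonal terms)."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def seasonal_naive(series, season_length, horizon):
    """Repeat the last observed season forward: forecast each future
    step with the value from exactly one season earlier."""
    return [series[-season_length + (h % season_length)] for h in range(horizon)]
```

These two capture level and seasonality, the same components that ARIMA/SARIMA and trend decomposition model explicitly.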
Data & Deployment
- Experience handling structured and large datasets
- SQL proficiency
- Exposure to model deployment (API-based deployment preferred)
- Knowledge of MLOps concepts is an added advantage
Tools (Preferred)
- TensorFlow / PyTorch (optional)
- Airflow / MLflow
- Cloud platforms (AWS / Azure / GCP)
Educational Qualification
- Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, Mathematics, or related field.
Key Competencies
- Strong analytical and problem-solving skills
- Ability to communicate insights to technical and non-technical stakeholders
- Experience working in agile or fast-paced environments
Accessibility & Inclusion Statement
We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.
Equal Opportunity Employer (EOE) Statement
Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Hi,
Please find below the job description for the Data Science with ML role.
Type of hire : PWD and Non PWD
Employment Type : Full Time
Notice Period : Immediate Joiner
Work Days : Mon - Fri
About Ampera:
Ampera Technologies is a purpose-driven Digital IT Services company focused on supporting our clients with their Data, AI/ML, Accessibility, and other Digital IT needs. We also ensure that equal opportunities are provided to Persons with Disabilities talent. Ampera Technologies has its global headquarters in Chicago, USA, and its Global Delivery Center in Chennai, India. We are actively expanding our tech delivery team in Chennai and across India. We offer exciting benefits for our teams, such as:
- Hybrid and remote work options
- Opportunity to work directly with our global enterprise clients
- Opportunity to learn and implement evolving technologies
- Comprehensive healthcare
- A conducive environment for Persons with Disability talent, meeting physical and digital accessibility standards
About the Role
We are looking for a skilled Data Scientist with strong Machine Learning experience to design, develop, and deploy data-driven solutions. The role involves working with large datasets, building predictive and ML models, and collaborating with cross-functional teams to translate business problems into analytical solutions.
Key Responsibilities
- Analyze large structured and unstructured datasets to derive actionable insights.
- Design, build, validate, and deploy Machine Learning models for prediction, classification, recommendation, and optimization.
- Apply statistical analysis, feature engineering, and model evaluation techniques.
- Work closely with business stakeholders to understand requirements and convert them into data science solutions.
- Develop end-to-end ML pipelines including data preprocessing, model training, testing, and deployment.
- Monitor model performance and retrain models as required.
- Document assumptions, methodologies, and results clearly.
- Collaborate with data engineers and software teams to integrate models into production systems.
- Stay updated with the latest advancements in data science and machine learning.
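Most of the pipeline steps above start from a held-out evaluation split, so that validation never sees rows used for training. A minimal sketch in plain Python (the function name and seed are our own choices; scikit-learn offers the same idea as `sklearn.model_selection.train_test_split`):

```python
import random

def train_test_split(rows, test_fraction=0.2, seed=42):
    """Shuffle a copy of the data and split it into train and test sets."""
    rng = random.Random(seed)          # fixed seed keeps the split reproducible
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]
```

With 10 rows and `test_fraction=0.2`, this yields 8 training rows and 2 test rows, and together the two parts cover the original data exactly once.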
Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, Mathematics, or related fields.
- 5+ years of hands-on experience in Data Science and Machine Learning.
- Strong proficiency in Python (NumPy, Pandas, Scikit-learn).
- Experience with ML algorithms:
- Regression, Classification, Clustering
- Decision Trees, Random Forest, Gradient Boosting
- SVM, KNN, Naïve Bayes
- Solid understanding of statistics, probability, and linear algebra.
- Experience with data visualization tools (Matplotlib, Seaborn, Power BI, Tableau – preferred).
- Experience working with SQL and relational databases.
- Knowledge of model evaluation metrics and optimization techniques.
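Of the algorithms listed above, KNN is compact enough to sketch from scratch, which is a common exercise for this kind of role. A minimal plain-Python version (squared Euclidean distance, majority vote; names are our own, not scikit-learn's):

```python
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    def sq_dist(a, b):
        # Squared Euclidean distance; the square root is unnecessary for ranking.
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(zip(train_X, train_y), key=lambda pair: sq_dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

On a toy set with two clusters, points near the origin vote one label and points near (5, 5) vote the other.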
Preferred / Good to Have
- Experience with Deep Learning frameworks (TensorFlow, PyTorch, Keras).
- Exposure to NLP, Computer Vision, or Time Series forecasting.
- Experience with big data technologies (Spark, Hadoop).
- Familiarity with cloud platforms (AWS, Azure, GCP).
- Experience with MLOps, CI/CD pipelines, and model deployment.
Soft Skills
- Strong analytical and problem-solving abilities.
- Excellent communication and stakeholder interaction skills.
- Ability to work independently and in cross-functional teams.
- Curiosity and willingness to learn new tools and techniques.
Accessibility & Inclusion Statement
We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.
Equal Opportunity Employer (EOE) Statement
Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.