
Full Stack Engineer (Mid–Senior) – Enterprise AI Platforms
About Ampera
Ampera builds enterprise-grade AI platforms that sit at the intersection of large-scale data systems, intelligent orchestration, and applied AI.
Our products help Fortune 1000 companies optimize operations, manage risk, and unlock decision intelligence using AI, LLMs, and agentic workflows.
Role Overview
We are looking for a strong Full Stack Engineer who has built production-grade enterprise applications, worked with large datasets, and is excited about integrating AI systems and LLM-powered workflows into real-world platforms.
This is not a UI-only or API-only role.
You’ll own end-to-end system design, from frontend experiences to backend orchestration and AI integration.
What You’ll Work On
- Enterprise web platforms used by analysts, admins, and leadership
- High-scale data-intensive applications (query orchestration, risk intelligence, estimation engines)
- AI-augmented workflows (LLMs, agents, optimizers, explainability layers)
- Secure, governed, multi-tenant systems with role-based access
Key Responsibilities
Full Stack Development
- Design and build scalable web applications (frontend + backend)
- Develop API-first backend services for enterprise workflows
- Build admin dashboards, analyst workflows, and decision-ready UIs
- Ensure performance, reliability, and maintainability at scale
Backend & Data Systems
- Work with large-scale relational databases (Azure Synapse, Redshift, Snowflake, PostgreSQL, SQL Server, etc.)
- Design data models for high-volume, analytical workloads
- Implement caching, orchestration, background processing, and async workflows
- Integrate with enterprise systems (identity, BI tools, data platforms)
AI / LLM Integration
- Integrate LLMs (OpenAI, Azure OpenAI, etc.) into production systems
- Build AI-powered services:
  - Query optimization
  - Risk scoring & explainability
  - Estimation & prediction workflows
- Implement agentic AI patterns (multi-step reasoning, tool-using agents, orchestration)
- Work closely with ML engineers to productionize AI models
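The "agentic AI patterns" above can be sketched concretely. This is a minimal illustrative loop, not Ampera's actual implementation: `call_llm` is a deterministic stub standing in for a real LLM client (e.g. the OpenAI SDK), and `get_weather` is a hypothetical tool invented for the example.

```python
# Minimal sketch of a tool-using agent loop (plan -> act -> observe -> answer).
# call_llm and get_weather are stand-ins; a real system would call an LLM API.

def get_weather(city: str) -> str:
    """Hypothetical tool the agent may invoke."""
    return f"22C and clear in {city}"

TOOLS = {"get_weather": get_weather}

def call_llm(messages):
    """Stub planner: requests a tool call first, then returns a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Pune"}}
    return {"answer": messages[-1]["content"]}

def run_agent(question: str, max_steps: int = 5) -> str:
    """Multi-step loop: ask the planner, run requested tools, feed results back."""
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        step = call_llm(messages)
        if "answer" in step:                          # the model decided it is done
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])  # execute the requested tool
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not converge")

print(run_agent("What's the weather in Pune?"))  # -> 22C and clear in Pune
```

The bounded `max_steps` loop is the essential guardrail: a real tool-using agent must not be allowed to reason forever.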
Enterprise Readiness
- Implement RBAC, audit logging, governance, and guardrails
- Design systems with security, compliance, and observability in mind
- Contribute to CI/CD pipelines, deployments, and production monitoring
Required Skills & Experience
Core Engineering
- 4–8+ years of experience as a Full Stack Engineer
- Strong backend experience with Python (FastAPI preferred) or equivalent
- Solid frontend experience with React / modern JS frameworks
- Strong understanding of REST APIs, async processing, microservices
Data & Systems
- Hands-on experience with large databases and complex schemas
- Strong SQL skills and experience optimizing queries
- Experience building enterprise-grade, data-heavy applications
AI / Modern Stack
- Practical experience integrating LLMs or AI services into applications
- Familiarity with:
  - Prompting & structured outputs
  - AI evaluation & guardrails
  - Explainability and risk-aware AI
- Exposure to agentic AI frameworks or multi-step AI workflows is a big plus
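As a rough illustration of "structured outputs" with a guardrail, the sketch below validates a model's JSON reply before trusting it. The schema (a numeric `score` in [0, 1] plus a `reason` string) is an assumption made for this example, not any real Ampera schema.

```python
# Sketch of validating an LLM's structured (JSON) output before trusting it.
# The field names 'score' and 'reason' are assumptions for illustration.
import json

def parse_risk_score(raw: str) -> dict:
    """Parse and validate a model reply; raise on anything off-schema."""
    data = json.loads(raw)  # raises a ValueError subclass on non-JSON input
    if not isinstance(data.get("score"), (int, float)):
        raise ValueError("missing numeric 'score'")
    if not 0.0 <= data["score"] <= 1.0:
        raise ValueError("'score' outside [0, 1]")
    if not isinstance(data.get("reason"), str):
        raise ValueError("missing 'reason' string")
    return data

good = parse_risk_score('{"score": 0.73, "reason": "late filings"}')
print(good["score"])  # -> 0.73
```

Rejecting malformed replies at the boundary, rather than letting them propagate, is the core of the "AI evaluation & guardrails" idea.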
Nice-to-Have (Strong Differentiators)
- Experience building AI-powered SaaS or internal enterprise platforms
- Background in analytics, risk systems, finance, supply chain, or operations
- Experience with Redis, message queues, background workers
- Familiarity with Azure / AWS, containerization, Kubernetes
- Ability to translate business problems → system design
What We Look For (Beyond Skills)
- Strong systems thinking — you see the whole picture
- Comfortable operating in ambiguous, zero-to-one builds
- Ability to reason about scale, cost, and enterprise constraints
- Builder mindset — you ship, iterate, and own outcomes
Why Join Ampera
- Work on real enterprise AI platforms, not demos or chatbots
- Exposure to LLMs, agentic AI, and applied AI at scale
- High ownership, senior-heavy team, minimal bureaucracy
- Opportunity to shape core architecture and AI strategy

About The Company
The client is a 17-year-old multinational company headquartered in Whitefield, Bangalore, with another delivery center in Hinjewadi, Pune. It also has offices in the US and Germany, works with several OEMs and product companies in about 12 countries, and has a 200+ strong team worldwide.
The Role
Power BI front-end developer in the Data Domain (Manufacturing, Sales & Marketing, Purchasing, Logistics, …). You will be responsible for the Power BI front-end design, development, and delivery of highly visible data-driven applications in Compressor Technique. You take a quality-first approach, ensuring data is visualized in a clear, accurate, and user-friendly manner. You ensure standards and best practices are followed and that documentation is created and maintained. Where needed, you take initiative and make recommendations to drive improvements. In this role you will also be involved in the tracking, monitoring, and performance analysis of production issues and in the implementation of bug fixes and enhancements.
Skills & Experience
• The ideal candidate has a degree in Computer Science, Information Technology, or equivalent through experience.
• Strong knowledge of BI development principles, time intelligence, functions, dimensional modeling, and data visualization is required.
• Advanced knowledge and 5–10 years of experience with professional BI development & data visualization is preferred.
• You are familiar with data warehouse concepts.
• Knowledge of MS Azure (data lake, Databricks, SQL) is considered a plus.
• Experience with scripting languages such as PowerShell and Python to set up and automate Power BI platform-related activities is an asset.
• Good knowledge (oral and written) of English is required.
• Configuration, administration, customization, and maintenance of Okta CIAM environments.
• Design and maintain configuration manuals and documentation required to sustain the Okta CIAM platform.
• Review Okta platform configurations to ensure the solution is optimized and secure for business needs.
• Support and resolve system incidents, problems, and changes.
Requirements
• 3+ years of hands-on experience with designing and building Okta solution platforms.
• At least one Okta certification earned in the last 2 years: Okta Certified Administrator, Okta Certified Consultant, or Okta Certified Developer
• Strong understanding of Single Sign On and relevant standards (OIDC, OAuth, SAML)
• 1+ year of development experience using RESTful APIs in any programming language
• Strong communication and documentation skills
• Ability to collaborate and interact productively with team members and key stakeholders.
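For the SSO standards listed above, here is a minimal sketch of the first leg of an OIDC authorization-code flow: building the authorization request URL. The issuer, client id, and redirect URI below are hypothetical placeholders, not real Okta tenant values.

```python
# Sketch of constructing an OIDC authorization-code request URL.
# All concrete values below are placeholders for illustration.
from urllib.parse import urlencode

def build_auth_url(issuer: str, client_id: str, redirect_uri: str, state: str) -> str:
    params = {
        "client_id": client_id,
        "response_type": "code",          # authorization-code flow
        "scope": "openid profile email",  # 'openid' marks the request as OIDC
        "redirect_uri": redirect_uri,
        "state": state,                   # CSRF protection, echoed back to the app
    }
    return f"{issuer}/v1/authorize?" + urlencode(params)

url = build_auth_url(
    "https://example.okta.com/oauth2/default",
    "my-client-id",
    "https://app.example.com/callback",
    "abc123",
)
print(url)
```

After the user authenticates, the identity provider redirects back with a `code` that the application exchanges for tokens at the token endpoint.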
Handle customer complaints; provide appropriate solutions and alternatives within the time limits, and follow up to ensure resolution.
Keep records of customer interactions, process customer accounts, and file documents.
Follow communication procedures, guidelines, and policies.
Notice Period – Immediate to 60 Days
Work location – Cisco Manesar Office
Experience – 2 to 10 Years.
- Strong coding experience in programming languages like Python, Java, C
- Experience with YANG data modelling
- Experience with REST/SOAP APIs
- Experience with libraries such as Requests and Beautiful Soup
Read this out, or our HR is in trouble :(
We are looking for young, enthusiastic & rebellious individuals to help us skyrocket our business activities. Yes bro, we really mean it 😇
Business Development Executive responsibilities include discovering and pursuing new sales prospects, negotiating deals and maintaining customer satisfaction ❌
Hmmm: No. You are going to identify and win over the targets who would possibly love to hear about us, be comfortable with being rejected, take No for an answer when you hear a No, and, moreover, keep these humans happy when they join us ✅
We’d love to meet you if you are not shy but shameless, ready to present to the whole world what you are going to build with us and, most importantly, democratise the educational ecosystem.
Ultimately, you’ll help us meet and surpass business expectations and contribute to our company’s rapid and sustainable growth (formally, I had to put this down) 😌
Job Type: Full-time 👀
Salary: ₹3,60,000 fixed + ₹2,00,000 variable (₹5,60,000 per annum)
What will you get to do here?
- Conduct market research to identify selling possibilities and evaluate customer needs
- Actively seek out new sales opportunities through cold calling, networking and social media.
- Set up meetings with potential clients and listen to their wishes and concerns
- Prepare and deliver appropriate presentations on products and services
- Create frequent reviews and reports with sales and financial data and present them to your managers
- Participate on behalf of the company in exhibitions or conferences
- Negotiate/close deals and handle complaints or objections
- Collaborate with team members to achieve better results
- Gather feedback from customers or prospects and share it with internal teams
What are the qualities we are looking for?
- Proven experience as a Business Development Executive; freshers are also welcome.
- Proficiency in English
- Hands-on experience with CRM software is a plus
- Thorough understanding of marketing and negotiating techniques
- Fast learner and passion for sales
- Self-motivated with a results-driven approach
- Aptitude in delivering attractive presentations
Schedule:
Morning shift
Supplemental pay types:
Commission pay
Performance bonus
COVID-19 considerations:
A double dose of vaccination is required.
Regular sanitisation shall be done on the office premises.
Ability to commute/relocate:
JP Nagar, Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Required)
Company: Runo – https://runo.in
Location: Hyderabad, India
Who We Are:
Runo is a product-based, funded company incubated at T-Hub (IIITH Campus) and currently operating from Cyber Towers, HiTec City. We offer a Call Management CRM for sales teams across the world. We are currently scaling up the engineering team to support the continued growth of user acquisition in the global market, after steady success in the Indian market.
The Role:
Runo is looking for a Lead Backend Developer to join our team. The lead backend developer is responsible for scaling up the existing architecture to handle billions of requests every month (we are currently at a scale of 100 million requests per month). You will contribute to architecting the database/reporting layer to support complex queries and data exports of millions of records. You will be a lead developer responsible for the development of new software products and enhancements to existing products.
Our Ideal candidate:
- Proficiency in NodeJS
- Expert knowledge of NoSQL databases (MongoDB preferred)
- Hands-on experience working on products handling more than a million requests per month
- 4–8 years of relevant professional work experience
- Self-motivated and able to work independently, with a commitment to high quality
Responsibilities:
- Contribute to the product architecture to handle millions of requests per day
- Optimize the architecture for regional data zones
- Develop high-performance, reusable & bug-free APIs
- Optimize the existing APIs for performance (via data denormalization and by decoupling long-running APIs using queues)
- Get familiar with AWS services like API Gateway, Lambda, SNS, SQS, and SES
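The queue-decoupling idea mentioned in the responsibilities can be sketched with stdlib primitives. This illustrates the pattern only and assumes nothing about Runo's actual NodeJS/AWS implementation: the API call enqueues a job and returns an id immediately, while a background worker does the slow work.

```python
# Sketch of decoupling a long-running operation from the request path:
# the handler enqueues and returns a job id; a worker runs the slow export.
import queue
import threading
import uuid

jobs = {}                  # job_id -> {"status": ..., ...}
work_queue = queue.Queue()

def submit_export(record_ids):
    """The fast path: enqueue the job and return an id immediately."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending"}
    work_queue.put((job_id, record_ids))
    return job_id

def worker():
    """Background worker: drains the queue and does the slow work."""
    while True:
        job_id, record_ids = work_queue.get()
        # ...the slow export of millions of records would happen here...
        jobs[job_id] = {"status": "done", "rows": len(record_ids)}
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

job = submit_export([101, 102, 103])
work_queue.join()              # demo only; a real client would poll a status endpoint
print(jobs[job]["status"])     # -> done
```

In production the in-process queue would typically be replaced by a broker such as SQS, and the client would poll a status endpoint with the job id instead of joining the queue.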
What We Offer:
- Competitive Salary
- ESOPs
- Medical Insurance
- Developing telemetry software to connect Junos devices to the cloud
- Fast prototyping and laying the SW foundation for product solutions
- Moving prototype solutions to a production cloud multitenant SaaS solution
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
- Build analytics tools that utilize the data pipeline to provide significant insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with partners including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics specialists to strive for greater functionality in our data systems.
Qualification and Desired Experiences
- Master's in Computer Science, Electrical Engineering, Statistics, Applied Math, or equivalent fields with a strong mathematical background
- 5+ years of experience building data pipelines for data science-driven solutions
- Strong hands-on coding skills (preferably in Python) for processing large-scale data sets and developing machine learning models
- Familiarity with one or more machine learning or statistical modeling tools such as NumPy, scikit-learn, MLlib, TensorFlow
- Good team player with excellent written, verbal, and presentation skills
- Create and maintain optimal data pipeline architecture,
- Assemble large, sophisticated data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Experience with AWS, S3, Flink, Spark, Kafka, Elastic Search
- Previous work in a start-up environment
- 3+ years of experience building data pipelines for data science-driven solutions
- We are looking for a candidate with 9+ years of experience in a Data Engineer role who has a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift
- Experience with stream-processing systems: Storm, Spark-Streaming, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and find opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Proven understanding of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Strong project management and interpersonal skills.
- Experience supporting and working with multi-functional teams in a multidimensional environment.
Electron Labs (https://electronlabs.org/) is building a new protocol to make various blockchains interoperable. Our goal is to enable cross-chain contracts to connect with each other in the same way as same-chain contracts connect, i.e., via function calls. We have developed a new in-house tech called the Bi-Directional Light Client (https://garvitgoel.medium.com/what-is-bdlc-and-how-it-works-5716fbbacde8) that allows a smart contract to independently verify whether the cross-chain transactions submitted to it are valid. We further use ZK-SNARKs to reduce the gas cost of running the light client on-chain.
Desired Qualifications
- A good grasp of blockchain fundamentals, various consensus algorithms, and detailed knowledge of at least one major blockchain.
- Previous work (demonstrable) in blockchain technology, web3, or protocol development
- In-depth understanding of Number Theory and Cryptography. Knowledge of cryptographic algorithms and zero-knowledge proofs is highly preferred.
- Superior mathematics and problem-solving skills, excellent pattern recognition, passion for solving puzzles.
- Proficiency in one of the following languages is highly preferred: Golang, Rust. Working knowledge of Solidity, JavaScript, Python, or similar scripting languages puts you at the front of our list
- Grounded research skills and an ability to absorb documentation quickly
- Strong background in Computer Science. Knowledgeable in Algorithms and Data Structures.
Roles and Responsibilities
- You will be assigned one blockchain (Ethereum / Polygon / NEAR / Tendermint / Polkadot / Aleo). You will be required to implement the light client of this blockchain in the Circom language (https://docs.circom.io/), a domain-specific language for ZK-SNARKs.
- Write Golang-based networking relayers that enable communication between blockchains and zk-provers.
- Maintain relayer and zk-prover infrastructure (cloud machine and codebase)
- Write kickass documentation.
Location: Delhi NCR, India | Remote
About MoEngage
MoEngage is a fast-paced startup helping companies run smart marketing efforts to reach their customers. We are a leading marketing technology stack provider helping brands redefine their customer engagement in the mobile era. Brands use MoEngage to drive long-term, personalised, context-based engagement across channels, increasing both customer retention and customer LTV. Sitting at the conflux of diverse technologies like artificial intelligence, big data, and web & mobile platforms, MoEngage's technology analyses billions of data points generated by customers and their devices to predict customer behaviour and build marketing campaigns that proactively engage users.
In just four years since inception, MoEngage has come to work with leading brands across the e-commerce, entertainment, travel, publishing, and banking domains, among others. With marquee clients like Vodafone, Oyo, Airtel, and McAfee, MoEngage has 125+ paying customers in the enterprise & internet companies space in India, the US, South East Asia & the EU. With a global presence spanning 35 countries, MoEngage has offices in San Francisco, Berlin, Jakarta, and Bengaluru.
Today, MoEngage is an industry pioneer in the space, engaging more than 350M devices. This includes approximately 40B events tracked and 30B+ messages sent per month to millions of users across the globe.
As part of the Engineering team at MoEngage, here are some things you can expect:
- Take ownership and be responsible for what you build - no micro management
- Work with A players (some of the best talent in the country), and expedite your learning curve and career growth
- Make in India and build for the world at a scale of 350M active users, which no other internet company in the country has seen
- Learn together from different teams on how they scale to millions of users and billions of messages.
- Explore the latest in topics like Data Pipeline, MongoDB, ElasticSearch, Kafka, Spark, Samza and share with the team
and more importantly have fun while you work on scaling MoEngage.
About the Push team
The Push team is one of the core teams at MoEngage, responsible for sending close to a billion notifications every day to help clients engage their users better. As a member of the Push team, you will develop high-performance solutions to deliver personalised, context-based notifications across various channels, helping achieve increased customer retention as well as customer LTV. You will also design and build features that help clients provide a customised, more personalised experience for end users at scale. Here you will have a chance to own systems and develop features end to end, i.e. right from inception to deployment. Though we work at scale, reliability is of utmost importance for us, and we build in-house solutions like Campaign Watcher & AutoBatchRunner to ensure 100% transparency and delivery of notifications.
- Scaling the campaign sending system to ensure industry-leading delivery times (40 million notifications in under 2 minutes)
- Rich campaign content delivery and templating support
- Build and develop features to have appealing and consistent experiences across channels which touch 200+ customers and 200+ million users!
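A back-of-the-envelope check of the delivery target quoted above (40 million notifications in under 2 minutes). The per-worker throughput used below is a hypothetical figure for illustration, not MoEngage's actual configuration.

```python
# 40M notifications in a 2-minute window implies this sustained send rate:
notifications = 40_000_000
window_seconds = 2 * 60

per_second = notifications / window_seconds
print(f"{per_second:,.0f} sends/sec")   # -> 333,333 sends/sec

# If each worker sustained a hypothetical 2,000 sends/sec, the fleet would need:
per_worker = 2_000
workers = -(-per_second // per_worker)  # ceiling division
print(f"{int(workers)} workers")        # -> 167 workers
```

Numbers at this scale explain why the team invests in batching and delivery-watchdog tooling rather than sending notifications one at a time.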
Skill Requirements
- Proven experience in handling large infrastructure and distributed systems
- Familiarity with Python-related technologies and frameworks like Django or Pyramid.
- Familiarity with at least one cloud computing infrastructure - GCP / Azure / AWS
- Familiarity with task queue frameworks like Celery or Pika is a plus.
- Tech Stack - Python, Falcon, Elasticsearch, MongoDB, AWS (SQS, S3), Linux, MapReduce








