
- 3-6 years of experience in Golang development
- Understands requirements well and comes up with an efficient design
- Develops complex, robust, and bug-free products
- Estimates accurately
- Takes well-reasoned tech decisions keeping in mind goals and trade-offs
- Becomes a go-to person in more than one area; provides technical mentoring to the team
- Communicates clearly, gets clarifications, and establishes expectations for all parties
- Helps establish SDLC best practices and high standards of code quality
- Demonstrates excellent problem-solving and debugging skills
- Proactively identifies and resolves issues in requirements, design, and code
Ideal Candidate Profile:
- Solid experience in Golang is a must.
- Solid understanding of Apache products.
- Should have experience in cloud computing (AWS is desired).
- Has the ability to quickly learn and contribute to multiple codebases
- Overcomes roadblocks and requires minimal oversight
- Takes the initiative to fix issues/tech debt before it is assigned to them
- Able to deep-dive into the codebase and advise QA of possible regression impact
- Communicates tech decisions through design docs and tech talks
- Has delivered projects with end-to-end accountability
- Keeps track of industry trends and introduces the right tech/tools for a given job
- Excellent understanding of software engineering practices, design patterns, data structures, and algorithms
- Experience in a product-driven organization

About Us:
MyOperator is a Business AI Operator and a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform.
Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance, all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement, without fragmented tools or increased headcount.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
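As an illustration of the "clean, tested, and maintainable code with pytest" expectation, here is a minimal sketch: a small pure helper plus a pytest-style test. The function `slugify_campaign_name` and its behavior are illustrative assumptions for this example, not part of any MyOperator codebase.

```python
# Illustrative only: a small, pure helper plus a pytest-style test.
# slugify_campaign_name is a hypothetical function, not a real API.
import re

def slugify_campaign_name(name: str) -> str:
    """Lowercase, trim, and replace runs of non-alphanumerics with '-'."""
    slug = re.sub(r"[^a-z0-9]+", "-", name.strip().lower())
    return slug.strip("-")

# pytest collects functions named test_*; run with `pytest this_file.py`.
def test_slugify_campaign_name():
    assert slugify_campaign_name("  Summer Sale 2024! ") == "summer-sale-2024"
    assert slugify_campaign_name("WhatsApp/Calls") == "whatsapp-calls"
```

Keeping helpers pure like this makes them trivially testable without spinning up the web framework or a database.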
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT and Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one
● Complete ownership of the components one works on, starting from architecture design to monitoring metrics post-deployment
● Strong bias for action with a problem-solving mindset: meeting current requirements or resolving issues while keeping the bigger picture in mind
● Thinking big while designing components, keeping in mind that if we build the current version for x users, it will be 10x within a year and 100x within 1.5 years
● Hands-on experience in software development with excellent problem-solving skills
● Hands-on experience building highly available, scalable systems
● Expertise in Java, with data structures, algorithms, Spring, and Hibernate skills
● Knowledge of NoSQL, MongoDB, caching data stores, queuing, and search
● Proficiency in computer science fundamentals: object-oriented design, data structures, algorithm design, and complexity analysis
● Good with design patterns and architecting solutions for large-scale applications
● Nice to have: Cassandra, Kafka, or Aerospike development experience with highly scalable and performant software systems
- Excellent knowledge of Core Java (J2SE) and J2EE technologies
- Hands-on experience with RESTful services and API design is a must
- Knowledge of microservices architecture is a must
- Knowledge of design patterns is a must
- Strong knowledge of exception handling and logging mechanisms is a must
- Agile Scrum participation experience; work experience with several agile teams on an application built with microservices and event-based architectures, deployed in hybrid (on-prem/cloud) environments
- Good knowledge of the Spring framework (MVC, Cloud, Data, Security, etc.) and an ORM framework like JPA/Hibernate
- Experience managing the source code base through a version control tool like SVN, GitHub, Bitbucket, etc.
- Experience using and configuring Continuous Integration tools such as Jenkins, Travis, GitLab, etc.
- Experience in design and development of SaaS/PaaS-based architecture and tenancy models
- Experience in SaaS/PaaS-based application development used by a high volume of subscribers/customers
- Awareness and understanding of data security and privacy
- Experience performing Java code reviews using review tools like SonarQube
- Good understanding of the end-to-end software development lifecycle; ability to read and understand requirements and design documents
- Good analytical skills; should be self-driven
- Good communication and interpersonal skills
- Open to learning new technologies and domains
- A good team player, ready to take up new challenges; active communication and coordination with clients and internal stakeholders
Requirements: Skills and Qualifications
6-8 years of experience in developing Java/J2EE-based enterprise web applications
Languages: Java, J2EE, and Python
Databases: MySQL, Oracle, SQL Server, PostgreSQL, Redshift, MongoDB
DB Script: SQL and PL/SQL
Frameworks: Spring, Spring Boot, Jersey, Hibernate and JPA
OS: Windows, Linux/Unix
Cloud Services: AWS and Azure
Version Control/DevOps tools: Git, Bitbucket, and Jenkins
Message brokers: RabbitMQ and Kafka
Deployment Servers: Tomcat, Docker, and Kubernetes
Build Tools: Gradle/Maven
About Company
Espressif Systems (688018) is a public multinational, fabless semiconductor company established in 2008, with headquarters in Shanghai and offices in Greater China, India, and Europe. We have a passionate team of engineers and scientists from all over the world, focused on developing cutting-edge Wi-Fi and Bluetooth, low-power IoT solutions. We have created the popular ESP8266 and ESP32 series of chips, modules, and development boards. By leveraging wireless computing, we provide green, versatile, and cost-effective chipsets. We have always been committed to offering IoT solutions that are secure, robust, and power-efficient. By open-sourcing our technology, we aim to enable developers to use Espressif’s technology globally and build smart connected devices. In July 2019, Espressif made its Initial Public Offering on the Sci-Tech Innovation Board (STAR) of the Shanghai Stock Exchange (SSE).
Espressif has a technology center in Pune. The focus is on embedded software engineering and IoT solutions for our growing customers.
About the Role
Espressif’s RainMaker (https://rainmaker.espressif.com/) is a paradigm-shifting IoT cloud platform that seamlessly connects IoT devices to mobile apps, voice assistants, and other services. It is designed with scalability, security, reliability, and operational cost at the center. We are looking for senior cloud engineers who can significantly contribute to this platform by means of architecture, design, and implementation. It is highly desirable that the candidate has prior experience working on large-scale cloud product development and understands the responsibilities and challenges well. Strong hands-on experience in writing code in Go, Java, or Python is a must.
This is an individual contributor role.
Minimum Qualifications
- BE/B.Tech in Computer Science with 5-10 years of experience
- Strong computer science fundamentals
- Extensive programming experience in one of these programming languages (Java, Go, Python) is a must
- Good working experience with any of the cloud platforms: AWS, Azure, Google Cloud Platform
- Certification in any of these cloud platforms will be an added advantage
- Good experience in the development of RESTful APIs, handling the security and performance aspects
- Strong debugging and troubleshooting skills
- Experience working with an RDBMS or any NoSQL database like DynamoDB, MySQL, Oracle
- Working knowledge of CI/CD tools (Maven/Gradle, Jenkins) and experience in a Linux (or Unix) based environment
Desired Qualifications
- Exposure to serverless computing frameworks like AWS Lambda, Google Cloud Functions, Azure Functions
- Some exposure to front-end development tools: HTML5, CSS, JavaScript, React.js/Angular.js
- Working knowledge of Docker and Jenkins
- Prior experience working in the IoT domain will be an added advantage
What to expect from our interview process
- The first step is to email your resume or apply to the relevant open position, along with a sample of something you have worked on, such as a public GitHub repo or side project.
- Next, after shortlisting your profile, a recruiter will get in touch with you via a channel that works for you, e.g. email or phone. This will be a short chat to learn more about your background and interests, to share more about the job and Espressif, and to answer any initial questions you have.
- Successful candidates will then be invited to 2 to 3 rounds of technical interviews, based on feedback from the previous round.
- Finally, successful candidates will have interviews with HR.
What you offer us
- Ability to provide technical solutions and support that foster collaboration and innovation.
- Ability to balance a variety of technical needs and priorities according to Espressif’s growing needs.
What we offer
- An open-minded, collaborative culture of enthusiastic technologists.
- Competitive salary
- 100% company paid medical/dental/vision/life coverage
- Frequent training by experienced colleagues and chances to take international trips, attend exhibitions, technical meetups, and seminars.
Shypmax is India's first and only cross-border logistics platform, backed by a contemporary product and premium service. We are one of the first IOSS-ready courier services in India, focusing on compliance with new regulations in the European Union (EU). We deliver to 220 countries, including the UK, USA, South East Asia, Australia, Europe, and Canada, with 70+ carrier and network partnerships placed globally and a perfect combination of technology and optimized shipping solutions.
Job Responsibilities:
- Prior experience in deploying scalable infrastructure on the cloud
- Architect, design, and develop web applications using RESTful services on Node.js
- Proficient understanding of code versioning tools, such as Git
- Lead code reviews to drive teams to the highest standards for Node.js web apps
- Strong background in algorithms, data structures, and database design
- Experience in Node and Redis
- Experience in cloud infrastructure like Google App Engine, AWS, Heroku, etc.
- Design and create efficient RESTful endpoints for both internal and public consumption
- Participate in regular bug fixing
- Intimate knowledge of Git, GitHub, AWS, and CDNs
- Experience creating RESTful endpoints using the REST framework
- Develop high-performing REST APIs for application functionality
Necessary Requirements:
- Minimum 2 years of experience with Node.js
- Minimum 1 year of webhook integration and API integration
- Hands-on experience working on Node.js
- Hands-on experience with REST APIs for application functionality
- Hands-on experience with cloud infrastructure
Job Responsibilities:
- Support, maintain, and enhance existing and new product functionality for trading software in a real-time, multi-threaded, multi-tier server architecture environment, and create high- and low-level designs for concurrent, high-throughput, low-latency software architecture
- Provide software development plans that meet future needs of clients and markets
- Evolve the new software platform and architecture by introducing new components and integrating them with existing ones
- Perform memory, CPU, and resource management
- Analyze stack traces, memory profiles and production incident reports from traders and support teams
- Propose fixes, and enhancements to existing trading systems
- Adhere to release and sprint planning with the Quality Assurance Group and Project Management
- Work on a team building new solutions based on requirements and features
- Attend and participate in daily scrum meetings
Required Skills:
- JavaScript and Python
- Multi-threaded browser and server applications
- Amazon Web Services (AWS)
- REST
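The multi-threaded server work this role describes commonly reduces to producer/consumer pipelines over bounded queues. Here is a minimal stdlib-only Python sketch of that pattern; the names (`ticks`, `producer`, `consumer`) and the hard-coded prices are illustrative assumptions, not code from any actual trading system.

```python
# Producer/consumer sketch: one thread produces market ticks, another
# consumes them. A bounded queue.Queue applies backpressure; a None
# sentinel signals end of stream. All data here is made up.
import queue
import threading

ticks = queue.Queue(maxsize=1000)
processed = []

def producer(n):
    for i in range(n):
        ticks.put({"symbol": "XYZ", "price": 100.0 + i})
    ticks.put(None)  # sentinel: no more ticks

def consumer():
    while True:
        tick = ticks.get()
        if tick is None:
            break
        processed.append(tick["price"])  # stand-in for real processing

p = threading.Thread(target=producer, args=(5,))
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
# With a single consumer, processed preserves FIFO order of the queue.
```

A bounded `maxsize` is the key design choice: when the consumer falls behind, `put()` blocks instead of letting memory grow without limit, which is how such pipelines stay predictable under load.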
Our client is a call management solutions company, which helps small to mid-sized businesses use its virtual call center to manage customer calls and queries. It is an AI- and cloud-based call operating facility that is affordable as well as feature-optimized. The advanced features offered, like call recording, IVR, toll-free numbers, and call tracking, are based on automation and enhance call-handling quality and processes for each client as per their requirements. They serve over 6,000 business clients, including large accounts like Flipkart and Uber.
- Selecting appropriate Cloud services to design and deploy an application based on given requirements
- Migrating complex, multi-tier applications on Cloud Platforms
- Designing and deploying enterprise-wide scalable operations on Cloud Platforms
- Implementing cost-control strategies
- Developing and maintaining the CI/CD pipeline for the assigned projects
- Conducting code reviews and making technical contributions to product architecture
- Getting involved in solving bugs and delivering small features
- Fostering technical decision-making on the team, but taking final decisions when necessary
- Understanding engineering metrics and seeking to improve them
- Understanding the requirements from the Product team, and planning and executing accordingly
What you need to have:
- Expert in designing Software and System architecture.
- Must have knowledge of Python (PHP knowledge is a plus) and related tools.
- Must understand MySQL queries and optimization.
- Must be able to build high performance teams.
- Must have worked with similar technologies: Redis, Docker, AWS, Elasticsearch.
- Must know about microservice architectures and CI/CD pipelines.
- Must be great at planning, researching and communicating.
- Must have a good understanding of application metrics.
About the Role
The Dremio India team owns the DataLake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides various opportunities to learn, deliver, and grow in your career. We are looking for technical leaders with passion and experience in architecting and delivering high-quality distributed systems at massive scale.
Responsibilities & ownership
- Lead end-to-end delivery and customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product
- Lead and mentor others about concurrency, parallelization to deliver scalability, performance and resource optimization in a multithreaded and distributed environment
- Propose and promote strategic company-wide tech investments taking care of business goals, customer requirements, and industry standards
- Lead the team to solve complex, unknown and ambiguous problems, and customer issues cutting across team and module boundaries with technical expertise, and influence others
- Review and influence designs of other team members
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Partner with other leaders to nurture innovation and engineering excellence in the team
- Drive priorities with others to facilitate timely accomplishments of business objectives
- Perform RCA of customer issues and drive investments to avoid similar issues in the future
- Collaborate with Product Management, Support, and field teams to ensure that customers are successful with Dremio
- Proactively suggest learning opportunities about new technology and skills, and be a role model for constant learning and growth
Requirements
- B.S./M.S/Equivalent in Computer Science or a related technical field or equivalent experience
- Fluency in Java/C++ with 15+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models and their use in developing distributed and scalable systems
- 8+ years of experience in developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Subject matter expert in one or more of: query processing or optimization, distributed systems, concurrency, microservice-based architectures, data replication, networking, storage systems
- Experience in taking company-wide initiatives, convincing stakeholders, and delivering them
- Expert in solving complex, unknown and ambiguous problems spanning across teams and taking initiative in planning and delivering them with high quality
- Ability to anticipate and propose plan/design changes based on changing requirements
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using latest technologies
- Hands-on experience of working projects on AWS, Azure, and GCP
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and GCP)
- Understanding of distributed file systems such as S3, ADLS or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
