
1. 4+ years of software development experience.
2. Strong experience with Kubernetes, Docker, and CI/CD pipelines in cloud-native environments.
3. Hands-on with NATS for event-driven architecture and streaming.
4. Skilled in microservices, RESTful APIs, and containerized app performance optimization.
5. Strong in problem-solving, team collaboration, clean code practices, and continuous learning.
6. Proficient in Java (Spring Boot) and Python (Flask) for building scalable applications and APIs.
7. Focus: Java, Python, Kubernetes, Cloud-native development

Job Description
We are seeking a talented and experienced Java Spring Boot Microservices Developer
to join our dynamic development team. In this role, you will be responsible for
designing, developing, and maintaining scalable, high-performance
microservices-based applications using Java and the Spring Boot framework.
Responsibilities:
● Collaborate with cross-functional teams to gather and analyze requirements for
the development of microservices applications.
● Design, develop, and implement robust and scalable microservices using Java
and SpringBoot.
● Build RESTful APIs and integrate them with external systems as required.
● Ensure the performance, security, and reliability of the microservices through
thorough testing and debugging.
● Participate in code reviews to ensure code quality, maintainability, and adherence
to coding standards.
● Troubleshoot and resolve technical issues related to microservices and their
integration with other components.
● Continuously research and evaluate emerging technologies and industry trends
related to microservices and recommend improvements to enhance application
development.
Requirements:
● Bachelor's degree in Computer Science, Software Engineering, or a related field.
● Strong experience in Java development, specifically with the Spring Boot framework.
● Proficiency in designing and developing microservices architectures and
implementing them using industry best practices.
● Solid understanding of RESTful API design principles and experience in building
and consuming APIs.
● Knowledge of cloud platforms and experience with containerization technologies
(e.g., Docker, Kubernetes) is highly desirable.
● Familiarity with agile development methodologies and tools (e.g., Scrum, JIRA) is
a plus.
● Excellent problem-solving and analytical skills with a keen attention to detail.
● Effective communication and collaboration skills to work effectively within a team
environment.
If you are a passionate Java developer with a strong focus on building scalable
microservices applications using Spring Boot, we would love to hear from you. Join our
team and contribute to the development of cutting-edge solutions that deliver
exceptional user experiences.
To apply, please submit your resume and a cover letter outlining your relevant
experience and achievements in Java Spring Boot microservices development.
Role: Sr. Java Developer
Experience: 6+ Years
Location: Bangalore (Whitefield)
Work Mode: Hybrid (2-3 days WFO)
Shift Timing: Regular Morning Shift
About the Role:
We are looking for a seasoned Java Developer with 6+ years of experience to join our growing engineering team. The ideal candidate should have strong expertise in Java, Spring Boot, Microservices, and cloud-based deployment using AWS or DevOps tools. This is a hybrid role based out of our Whitefield, Bangalore location.
Key Responsibilities:
- Participate in agile development processes and scrum ceremonies.
- Translate business requirements into scalable and maintainable technical solutions.
- Design and develop applications using Java, Spring Boot, and Microservices architecture.
- Ensure robust and reliable code through full-scale unit testing and TDD/BDD practices.
- Contribute to CI/CD pipeline setup and cloud deployments.
- Work independently and as an individual contributor on complex features.
- Troubleshoot production issues and optimize application performance.
Mandatory Skills:
- Strong Core Java and Spring Boot expertise.
- Proficiency in AWS or DevOps (Docker & Kubernetes).
- Experience with relational and/or non-relational databases (SQL, NoSQL).
- Sound understanding of Microservices architecture and RESTful APIs.
- Containerization experience using Docker and orchestration via Kubernetes.
- Familiarity with Linux-based development environments.
- Exposure to modern SDLC tools – Maven, Git, Jenkins, etc.
- Good understanding of CI/CD pipelines and cloud-based deployment.
Soft Skills:
- Self-driven, proactive, and an individual contributor.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal abilities.
- Able to plan, prioritize, and manage tasks independently.
Nice-to-Have Skills:
- Exposure to frontend technologies like Angular, JavaScript, HTML5, and CSS.
Roles and Responsibilities:
● Design, architect, and drive implementation of entire products and key engineering features
● Plan and track the development and release schedules, and provide visible leadership in crisis
● Establish and follow best practices, including coding standards and technical task management
● Involve in the end-to-end architecture design and implementation
● Deal with a dynamic project or feature with changing requirements
● Recruit, motivate and develop a superior team, establish a clear definition of functional excellence and create a culture based on those practices
Experience and Skills you MUST have:
● Web application development and microservices architecture using JavaScript, Python, JavaScript libraries such as ReactJS, and other web technologies: HTML, CSS, and JS
● Experience with RDBMS or NoSQL Database technologies like MySQL, MongoDB, PostgreSQL, ElasticSearch, and Redis
● End to end deployment and DevOps using - Kubernetes, Docker, GCP Stack
● Knowledge of Jenkins, Prometheus, New Relic, and other tools
● Experience implementing testing platforms and unit tests
● Rock solid at working with third-party dependencies and debugging dependency conflicts
Hiring For SDE II - Python (Remote)
The Impact you will create:
- Build campaign generation services that can send app notifications at a speed of 10 million a minute.
- Build dashboards to show real-time key performance indicators to clients.
- Develop complex user segmentation engines that create segments on terabytes of data within a few seconds.
- Build highly available and horizontally scalable platform services for ever-growing data.
- Use cloud-based services like AWS Lambda for blazing-fast throughput and auto-scalability.
- Work on complex analytics on terabytes of data, such as building cohorts, funnels, user path analysis, and Recency, Frequency & Monetary (RFM) analysis at blazing speed.
- Build backend services and APIs to create scalable engineering systems.
- As an individual contributor, tackle some of our broadest technical challenges, requiring deep technical knowledge, hands-on software development, and seamless collaboration with all functions.
- Envision and develop features that are highly reliable and fault tolerant to deliver a superior customer experience.
- Collaborate with various cross-functional teams in the company to meet deliverables throughout the software development lifecycle.
- Identify opportunities for improvement through data insights and research.
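The Recency, Frequency & Monetary (RFM) analysis mentioned above can be sketched at toy scale. The transactions and user IDs below are invented for illustration; a production engine would run this over terabytes of data, not an in-memory list:

```python
from datetime import date

# Toy transactions: (user_id, purchase_date, amount). Invented sample data.
transactions = [
    ("u1", date(2024, 6, 1), 120.0),
    ("u1", date(2024, 6, 20), 80.0),
    ("u2", date(2024, 3, 5), 500.0),
    ("u2", date(2024, 4, 9), 40.0),
    ("u3", date(2024, 6, 25), 15.0),
]

def rfm(transactions, today):
    """Compute per-user recency (days since last purchase),
    frequency (purchase count), and monetary (total spend)."""
    out = {}
    for user, when, amount in transactions:
        recency, freq, monetary = out.get(user, (None, 0, 0.0))
        days = (today - when).days
        recency = days if recency is None else min(recency, days)
        out[user] = (recency, freq + 1, monetary + amount)
    return out

scores = rfm(transactions, today=date(2024, 7, 1))
# e.g. scores["u1"] == (11, 2, 200.0): bought 11 days ago, twice, for 200.0 total
```

Segments (e.g. "recent big spenders") are then cut by thresholding or ranking these three scores per user.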
Primary Responsibilities
- End-to-end ownership of product development, from design through implementation, testing, deployment, and maintenance.
- Translating high-level requirements and end-user use cases into design proposals, decomposing complex features into smaller, short-term deliverable tasks.
- Maintaining a constant focus on scalability, performance, and robustness of architecture.
- Designing and implementing logging, monitoring, and alerting systems for existing and new infrastructure.
- Documenting APIs and architecture design.
- Mentoring and guiding juniors on their path to becoming solid developers.
What we look for:
- 4+ years of industry experience in technical leadership roles.
- Solid knowledge of Python, SQL, NoSQL, shell scripting, and the Linux operating environment.
- End-to-end experience in the design and development of highly scalable enterprise and cloud data products.
- Ability to challenge and redefine existing architecture to create robust, scalable, and reliable products.
- Hands-on experience with the design and troubleshooting of scalable web services, queue-based systems, distributed databases, and streaming services.
- Experience with modern DevOps technologies such as kOps, Kubernetes and Docker, CI/CD, monitoring, and autoscaling.

Mandatory Criteria:
- Strong hands-on experience with Kubernetes, with at least 2 years in production environments.
- Expertise in at least one public cloud platform (GCP preferred; AWS, Azure, or OCI).
- Proficiency in backend programming with Python, Java, or Kotlin (at least one is required).
- Strong backend experience.
- Hands-on experience with BigQuery or Snowflake for data analytics and integration.
About the Role
We are looking for a highly skilled and motivated Cloud Backend Engineer with 4–7 years of experience, who has worked extensively on at least one major cloud platform (GCP, AWS, Azure, or OCI). Experience with multiple cloud providers is a strong plus. As a Senior Development Engineer, you will play a key role in designing, building, and scaling backend services and infrastructure on cloud-native platforms.
Note: Experience with Kubernetes is mandatory.
Key Responsibilities
- Design and develop scalable, reliable backend services and cloud-native applications.
- Build and manage RESTful APIs, microservices, and asynchronous data processing systems.
- Deploy and operate workloads on Kubernetes with best practices in availability, monitoring, and cost-efficiency.
- Implement and manage CI/CD pipelines and infrastructure automation.
- Collaborate with frontend, DevOps, and product teams in an agile environment.
- Ensure high code quality through testing, reviews, and documentation.
Required Skills
- Strong hands-on experience with Kubernetes, with at least 2 years in production environments (mandatory).
- Expertise in at least one public cloud platform (GCP preferred; AWS, Azure, or OCI).
- Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
- Solid understanding of distributed systems, microservices, and cloud-native architecture.
- Experience with containerization using Docker and Kubernetes-native deployment workflows.
- Working knowledge of SQL and relational databases.
Preferred Qualifications
- Experience working across multiple cloud platforms.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
- Exposure to monitoring, logging, and observability stacks (e.g., Prometheus, Grafana, Cloud Monitoring).
- Hands-on experience with BigQuery or Snowflake for data analytics and integration.
Nice to Have
- Knowledge of NoSQL databases or event-driven/message-based architectures.
- Experience with serverless services, managed data pipelines, or data lake platforms.
Level of skills and experience:
5 years of hands-on experience using Python, Spark, and SQL.
Experienced in AWS cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake, and other data file formats.
Proficiency in an IaC tool such as Terraform, CDK, or CloudFormation.
Strong written and verbal English communication skills, and proficiency in communicating with non-technical stakeholders.
Striim (pronounced “stream” with two i’s for integration and intelligence) was founded in 2012 with a simple goal of helping companies make data useful the instant it’s born.
Striim’s enterprise-grade, streaming integration with intelligence platform makes it easy to build continuous, streaming data pipelines – including change data capture (CDC) – to power real-time cloud integration, log correlation, edge processing, and streaming analytics.
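The change data capture (CDC) idea above can be illustrated at toy scale by diffing two table snapshots into a stream of insert/update/delete events. This is only a sketch of the concept, not Striim's actual mechanism, which reads database transaction logs rather than comparing snapshots:

```python
def capture_changes(before, after):
    """Diff two {primary_key: row} snapshots into CDC-style events."""
    events = []
    for key, row in after.items():
        if key not in before:
            events.append(("insert", key, row))
        elif before[key] != row:
            events.append(("update", key, row))
    for key, row in before.items():
        if key not in after:
            events.append(("delete", key, row))
    return events

# Hypothetical table states before and after a transaction.
old = {1: {"name": "Ada"}, 2: {"name": "Bob"}}
new = {1: {"name": "Ada L."}, 3: {"name": "Cy"}}
events = capture_changes(old, new)
# yields an update for key 1, an insert for key 3, and a delete for key 2
```

A downstream consumer (cloud warehouse, log correlator, analytics job) applies these events in order to stay continuously in sync with the source.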
· Strong Core Java / C++ experience
· Excellent understanding of logical and object-oriented design patterns, algorithms, and data structures.
· Sound knowledge of application access methods, including authentication mechanisms, API quota limits, and different endpoint types (REST, Java, etc.)
· Strong experience with databases: not just a SQL programmer, but someone with knowledge of DB internals
· Sound knowledge of cloud databases available as a service (RDS, CloudSQL, Google BigQuery, Snowflake) is a plus
· Experience working in any cloud environment and microservices-based architecture utilizing GCP, Kubernetes, Docker, CircleCI, Azure, or similar technologies
· Experience in application verticals such as ERP, CRM, and Sales, with applications such as Salesforce, Workday, and SAP (not mandatory; an added advantage)
· Experience in building distributed systems (not mandatory; an added advantage)
· Expertise in data warehouses (not mandatory; an added advantage)
· Experience in developing and delivering a product as SaaS (not mandatory; an added advantage)
CricStox is a Pune startup building a trading solution in the realm of gametech x fintech.
We intend to build a sport-agnostic platform to allow trading in stocks of sportspersons under any sport
through our mobile & web-based applications.
We’re currently hiring a Backend Cloud Engineer who will gather and refine specifications and requirements
based on technical needs, and implement them using software development best practices.
Responsibilities:
● Mainly, but not limited to, maintaining, expanding, and scaling our microservices/app/site.
● Integrate data from various back-end services and databases.
● Stay plugged into emerging technologies and industry trends, and apply them to operations and
activities.
● Comfortably work and thrive in a fast-paced environment, learn rapidly, and master diverse web
technologies and techniques.
● Juggle multiple tasks within the constraints of timelines and budgets with business acumen.
What skills do I need?
● Excellent programming skills in JavaScript or TypeScript.
● Excellent programming skills in Node.js with the NestJS framework or equivalent.
● A solid understanding of how web applications work including security, session management, and
best development practices.
● Good working knowledge and experience of how AWS cloud infrastructure works, including services
like API Gateway, Cognito, S3, EC2, RDS, SNS, MSK, and EKS, is a MUST.
● Solid understanding of distributed event streaming technologies like Kafka is a MUST.
● Solid understanding of microservices communication using Saga Design pattern is a MUST.
● Adequate knowledge of database systems, OOPs and web application development.
● Adequate knowledge to create well-designed, testable, efficient APIs using tools like Swagger (or
equivalent).
● Good functional understanding of ORMs like Prisma (or equivalent).
● Good functional understanding of containerising applications using Docker.
● Good functional understanding of how a distributed microservice architecture works.
● Basic understanding of setting up a GitHub CI/CD pipeline to automate Docker image builds,
pushing to AWS ECR, and deploying to the cluster.
● Proficient understanding of code versioning tools, such as Git (or equivalent).
● Hands-on experience with network diagnostics, monitoring and network analytics tools.
● Aggressive problem diagnosis and creative problem-solving skills.
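The Saga design pattern listed above coordinates a transaction that spans multiple microservices as a sequence of local steps, each paired with a compensating action that undoes it if a later step fails. A minimal orchestration sketch follows; the step names (stock reservation, wallet charge, settlement) are hypothetical and not CricStox's actual services:

```python
def run_saga(steps):
    """Run (action, compensation) pairs in order; if any action fails,
    run the compensations of all completed steps in reverse order."""
    done = []
    for action, compensate in steps:
        try:
            action()
        except Exception:
            for undo in reversed(done):
                undo()
            return False
        done.append(compensate)
    return True

log = []  # records what ran, in order

def fail_settlement():
    raise RuntimeError("settlement failed")  # simulated downstream failure

ok = run_saga([
    (lambda: log.append("reserve_stock"), lambda: log.append("release_stock")),
    (lambda: log.append("charge_wallet"), lambda: log.append("refund_wallet")),
    (fail_settlement, lambda: None),
])
# the third step fails, so compensations run in reverse:
# refund_wallet first, then release_stock
```

In a real system each action and compensation would be a call to a separate service (often via Kafka events rather than direct calls), but the control flow is the same: forward steps, then reverse compensation on failure.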
- Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field; at least 3 years of relevant experience in lieu of the above if from a different stream of education.
- 3+ years of hands-on, demonstrable experience with:
▪ Stream and batch Big Data pipeline processing using Apache Spark and/or Apache Flink.
▪ Distributed cloud-native computing, including serverless functions.
▪ Relational, object store, document, graph, etc. database design and implementation.
▪ Microservices architecture, API modeling, design, and programming.
- 3+ years of hands-on development experience in Apache Spark using Scala and/or Java.
- Ability to write executable code for services using Spark RDD, Spark SQL, Structured Streaming, Spark MLlib, etc., with a deep technical understanding of the Spark processing framework.
- In-depth knowledge of standard programming languages such as Scala and/or Java.
- 3+ years of hands-on development experience in one or more libraries and frameworks such as Apache Kafka, Akka, Apache Storm, Apache NiFi, ZooKeeper, and the Hadoop ecosystem (i.e., HDFS, YARN, MapReduce, Oozie, and Hive); extra points if you can demonstrate your knowledge with working examples.
- 3+ years of hands-on development experience in one or more relational and NoSQL datastores such as PostgreSQL, Cassandra, HBase, MongoDB, DynamoDB, Elasticsearch, and Neo4j.
- Practical knowledge of distributed systems involving partitioning, bucketing, the CAP theorem, replication, horizontal scaling, etc.
- Passion for distilling large volumes of data and analyzing performance, scalability, and capacity issues in Big Data platforms.
- Ability to clearly distinguish system performance from Spark job performance, and to perform Spark performance tuning and resource optimization.
- Perform benchmarking/stress tests and document best practices for different applications.
- Proactively work with tenants on improving overall performance, and ensure the system is resilient and scalable.
- Good understanding of virtualization and containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
- Demonstrable working experience with API management, API gateways, service mesh, identity and access management, and data protection and encryption.
- Hands-on, demonstrable working experience with DevOps tools and platforms, viz., Jira, Git, Jenkins, code quality and security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, etc.
- Well-versed in AWS, Azure, and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS, Azure, and/or Google Cloud in any of these categories: Compute or Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, Security, Identity, & Compliance (or equivalent demonstrable cloud platform experience).
- Good understanding of storage, networks, and storage networking basics, which will enable you to work in a cloud environment.
- Good understanding of network, data, and application security basics, which will enable you to work in a cloud as well as a business applications / API services environment.
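The partitioning and bucketing concepts listed in the requirements above come down to assigning each key deterministically to one of N buckets via a stable hash, which is how distributed datastores and Spark spread data across nodes. A toy sketch, not any particular system's implementation:

```python
import hashlib

def partition_for(key, num_partitions):
    """Assign a key to a bucket via a stable hash.
    md5 is used here only for a deterministic, well-spread digest."""
    digest = hashlib.md5(str(key).encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

# Distribute some hypothetical user IDs across 4 buckets.
buckets = {}
for user_id in ["u1", "u2", "u3", "u4", "u5", "u6"]:
    buckets.setdefault(partition_for(user_id, 4), []).append(user_id)
# every key lands deterministically in exactly one of the 4 buckets,
# so any node can locate a key's partition without coordination
```

Replication then places each bucket on several nodes, and the CAP theorem constrains what the system can promise when those replicas are partitioned from each other.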








