
- Build campaign-generation services that can send app notifications at a rate of 10 million per minute
- Build dashboards that show real-time key performance indicators to clients
- Develop complex user-segmentation engines that create segments over terabytes of data within seconds
- Build highly available, horizontally scalable platform services for ever-growing data
- Use cloud services such as AWS Lambda for high throughput and auto-scaling
- Run complex analytics on terabytes of data at high speed, such as cohorts, funnels, user-path analysis, and Recency, Frequency & Monetary (RFM) analysis
- You will build backend services and APIs to create scalable engineering systems.
- As an individual contributor, you will tackle some of our broadest technical challenges, which require deep technical knowledge, hands-on software development, and seamless collaboration across all functions.
- You will envision and develop features that are highly reliable and fault tolerant to deliver a superior customer experience.
- Collaborate with cross-functional teams across the company to meet deliverables throughout the software development lifecycle.
- Identify and improve weak areas through data insights and research.
- 2–5 years of experience in backend development; must have worked with Java and shell/Perl/Python scripting.
- Solid understanding of engineering best practices, continuous integration, and incremental delivery.
- Strong analytical skills, debugging and troubleshooting skills, product line analysis.
- Experience with Agile methodology (sprint planning, working in JIRA, retrospectives, etc.).
- Proficiency with tools such as Docker, Maven, and Jenkins, and knowledge of Java frameworks such as Spring, Spring Boot, Hibernate, and JPA.
- Ability to design application modules using concepts such as object orientation, multi-threading, synchronization, caching, fault tolerance, sockets, various IPC mechanisms, and database interfaces.
- Hands-on experience with Redis, MySQL, streaming technologies such as Kafka (producers and consumers), and NoSQL databases such as MongoDB/Cassandra.
- Knowledge of version control (Git) and deployment processes (CI/CD).
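As a concrete illustration of the multi-threading, synchronization, and producer/consumer concepts listed above, here is a minimal JDK-only sketch. The class and queue names are illustrative only (this is not the Kafka client API, just the same decoupling idea with a bounded in-memory queue providing back-pressure):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal producer/consumer sketch: one thread enqueues notifications,
// another drains them; the bounded queue provides back-pressure.
public class NotificationPipeline {
    private static final String POISON = "__STOP__"; // sentinel marking end of stream

    public static List<String> run(List<String> messages) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);
        List<String> delivered = new ArrayList<>();

        Thread producer = new Thread(() -> {
            try {
                for (String m : messages) queue.put(m); // blocks when the queue is full
                queue.put(POISON);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String m = queue.take();            // blocks when the queue is empty
                    if (POISON.equals(m)) break;
                    synchronized (delivered) { delivered.add(m); }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        try {
            producer.join();
            consumer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return delivered;
    }
}
```

With a single producer and a single consumer over a FIFO queue, delivery order matches input order; a real notification pipeline would shard across many consumers and lose that global ordering.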

Key Skills & Requirements
- Strong proficiency in Java 11 and above
- Hands-on expertise in Spring Boot and Microservices Architecture
- Strong Programming, Analytical, and Problem-Solving skills
- Proficiency with NoSQL Databases (MongoDB, Cosmos DB) and RDBMS (SQL/Oracle/Postgres)
- Experience with Messaging Queues (RabbitMQ/Kafka)
- Good functional and domain knowledge
- Expertise in Project Architecture & Data Flows
- Proficient in CI/CD tools (Jenkins or equivalent)
- Strong experience with Testing Frameworks: JUnit, Mockito, Cucumber, BDD
- Basic knowledge of Cloud Platforms (Azure / AWS)
- Familiarity with Project Management Tools – JIRA, Confluence, ServiceNow
- Knowledge of monitoring tools such as New Relic, Splunk, and Nagios (good to have)
Responsibilities
- Design, develop, and maintain scalable backend applications using Java 11+ and Spring Boot
- Build and manage microservices-based solutions ensuring high performance and low latency
- Collaborate with cross-functional teams to define, design, and ship new features
- Implement best practices for CI/CD pipelines and automate deployment workflows
- Ensure code quality through unit testing, integration testing, and BDD practices
- Work with databases (SQL/NoSQL) for data modeling and optimization
- Monitor, troubleshoot, and enhance system performance using tools like Splunk, New Relic, Nagios
- Provide technical guidance and mentorship to team members
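To make the "scalable backend applications" responsibility concrete, here is a dependency-free sketch of a tiny HTTP service using only the JDK's built-in server and client. In the stack this posting describes, the handler would be a Spring Boot `@RestController`; the endpoint path and JSON body below are our own illustrative choices:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Minimal HTTP service sketch: a /health endpoint returning JSON.
public class HealthService {
    public static HttpServer start(int port) {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            server.createContext("/health", exchange -> {
                byte[] body = "{\"status\":\"UP\"}".getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
            });
            server.start();
            return server;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Small helper client used to exercise the endpoint.
    public static String get(String url) {
        try {
            return HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString()).body();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Passing port 0 lets the OS pick a free port, which keeps the sketch usable in tests without hard-coding an address.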
Shift timings: Afternoon
Job Summary
We are seeking an experienced Senior Java Developer with strong expertise in legacy system migration, server management, and deployment. The candidate will be responsible for maintaining, enhancing, and migrating an existing Java/JSF (PrimeFaces), EJB, REST API, and SQL Server-based application to a modern Spring Boot architecture. The role involves ensuring smooth production deployments, troubleshooting server issues, and optimizing the existing infrastructure.
Key Responsibilities
● Maintain and enhance the existing Java, JSF (PrimeFaces), EJB, REST API, and SQL Server application.
● Migrate the legacy system to Spring Boot while ensuring minimal downtime.
● Manage deployments using Ansible, GlassFish/Payara, and deployer.sh scripts.
● Optimize and troubleshoot server performance (Apache, Payara, GlassFish).
● Handle XML file generation, email integrations, and REST API maintenance.
● Manage the SQL Server database, including query optimization and schema updates.
● Collaborate with teams to ensure smooth transitions during migration.
● Automate CI/CD pipelines using Maven, Ansible, and shell scripts.
● Document migration steps, deployment processes, and system architecture.
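The XML file generation mentioned in the responsibilities can be done with the JDK's DOM and Transformer APIs alone, which keeps it portable through a Spring Boot migration. A minimal sketch (the `order`/`status` element names are hypothetical, not from the actual application):

```java
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Minimal XML generation with the JDK's built-in DOM + Transformer APIs.
public class XmlExport {
    public static String orderXml(String id, String status) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element order = doc.createElement("order"); // element names are illustrative
            order.setAttribute("id", id);
            Element st = doc.createElement("status");
            st.setTextContent(status);
            order.appendChild(st);
            doc.appendChild(order);

            StringWriter out = new StringWriter();
            Transformer tf = TransformerFactory.newInstance().newTransformer();
            tf.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
            tf.transform(new DOMSource(doc), new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```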
Required Skills & Qualifications
● 8+ years of hands-on experience with Java, JSF (PrimeFaces), EJB, and REST APIs.
● Strong expertise in Spring Boot (migration experience from legacy Java is a must).
● Experience with Payara/GlassFish server management and deployment.
● Proficient in Apache, Ansible, and shell scripting (deployer.sh).
● Solid knowledge of SQL Server (queries, stored procedures, optimization).
● Familiarity with XML processing, email integrations, and Maven builds.
● Experience in production deployments, server troubleshooting, and performance tuning.
● Ability to work independently and lead migration efforts.
Preferred Skills
● Knowledge of microservices architecture (helpful for modernization).
● Familiarity with cloud platforms (AWS/Azure) is a plus.
What you will do
Project Management
- Work with the product management team to create an engineering roadmap that aligns with the product roadmap.
- Translate engineering roadmap into executable internal projects, and own the end-to-end execution of these projects.
- Plan, execute and deliver projects as per schedule, content, and quality metrics.
- Manage the day-to-day activities of the engineering team using Agile practices.
- Keep stakeholders continually updated on the progress of projects and operations.
- Track and report engineering health metrics (such as bugs by severity, production incidents etc.)
People Management
- Hire and mentor a team of engineers
- Manage learning and development, and performance of your team.
- Own, conceptualize, and build a tech-focused team
Technical Work
- Act as a core stakeholder in all technical design and architecture decisions of the team.
- Review code, test plans, and deployment plans.
- Focus on and strengthen all aspects of reliability
What we look for
Must Haves
- Experience in telecom messaging software systems.
- Experience in distributed and scalable systems.
- Communication skills: excellent written and oral communication to present complex ideas and concepts clearly and concisely, and to keep key stakeholders updated on work/project progress.
Have you streamed a program on Disney+, watched your favorite binge-worthy series on Peacock or cheered your favorite team on during the World Cup from one of the 20 top streaming platforms around the globe? If the answer is yes, you’ve already benefitted from Conviva technology, helping the world’s leading streaming publishers deliver exceptional streaming experiences and grow their businesses.
Conviva is the only global streaming analytics platform for big data that collects, standardizes, and puts trillions of cross-screen, streaming data points in context, in real time. The Conviva platform provides comprehensive, continuous, census-level measurement through real-time, server side sessionization at unprecedented scale. If this sounds important, it is! We measure a global footprint of more than 500 million unique viewers in 180 countries watching 220 billion streams per year across 3 billion applications streaming on devices. With Conviva, customers get a unique level of actionability and scale from continuous streaming measurement insights and benchmarking across every stream, every screen, every second.
As Conviva is expanding, we are building products providing deep insights into end user experience for our customers.
Platform and TLB Team
The vision for the TLB team is to build data processing software that works on terabytes of streaming data in real time. Engineer the next-gen Spark-like system for in-memory computation of large time-series datasets, spanning both Spark-like backend infrastructure and a library-based programming model. Build a horizontally and vertically scalable system that analyzes trillions of events per day with sub-second latencies. Utilize the latest big data technologies to build solutions for use cases across multiple verticals. Lead technology innovation and advancement that will have big business impact for years to come. Be part of a worldwide team building software using the latest technologies and the best software development tools and processes.
What You’ll Do
This is an individual contributor position. Expectations will be on the below lines:
- Design, build and maintain the stream processing, and time-series analysis system which is at the heart of Conviva's products
- Responsible for the architecture of the Conviva platform
- Build features, enhancements, new services, and bug fixing in Scala and Java on a Jenkins-based pipeline to be deployed as Docker containers on Kubernetes
- Own the entire lifecycle of your microservice including early specs, design, technology choice, development, unit-testing, integration-testing, documentation, deployment, troubleshooting, enhancements etc.
- Lead a team to develop a feature or parts of the product
- Adhere to the Agile model of software development to plan, estimate, and ship per business priority
What you need to succeed
- 9+ years of work experience in software development of data processing products.
- Engineering degree in software or equivalent from a premier institute.
- Excellent knowledge of Computer Science fundamentals such as algorithms and data structures. Hands-on experience with functional programming and knowledge of its concepts
- Excellent programming and debugging skills on the JVM. Proficient in writing code in Scala/Java/Rust/Haskell/Erlang that is reliable, maintainable, secure, and performant
- Experience with big data technologies like Spark, Flink, Kafka, Druid, HDFS, etc.
- Deep understanding of distributed systems concepts and scalability challenges including multi-threading, concurrency, sharding, partitioning, etc.
- Experience/knowledge of Akka/Lagom framework and/or stream processing technologies like RxJava or Project Reactor will be a big plus. Knowledge of design patterns like event-streaming, CQRS and DDD to build large microservice architectures will be a big plus
- Excellent communication skills. Willingness to work under pressure. Hunger to learn and succeed. Comfortable with ambiguity. Comfortable with complexity
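The posting above describes real-time, server-side sessionization of streaming events. To make that idea concrete, here is a deliberately simplified, JDK-only sketch that splits one viewer's sorted event timestamps into sessions whenever the inactivity gap exceeds a timeout; the method names and gap parameter are our own, and a production system would do this incrementally over unbounded streams rather than over an array:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified gap-based sessionization over sorted timestamps (millis).
public class Sessionizer {
    /** Returns the number of events in each session, in order. */
    public static List<Integer> sessionSizes(long[] sortedTimestamps, long gapMillis) {
        List<Integer> sizes = new ArrayList<>();
        int count = 0;
        for (int i = 0; i < sortedTimestamps.length; i++) {
            if (count > 0 && sortedTimestamps[i] - sortedTimestamps[i - 1] > gapMillis) {
                sizes.add(count); // gap too large: close the current session
                count = 0;
            }
            count++;
        }
        if (count > 0) sizes.add(count); // close the final session
        return sizes;
    }
}
```

For example, events at 0, 10, and 20 ms followed by events at 1000 and 1010 ms, with a 100 ms gap threshold, yield two sessions of 3 and 2 events.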
Underpinning the Conviva platform is a rich history of innovation. More than 60 patents represent award-winning technologies and standards, including first-of-its-kind innovations like time-state analytics and AI-automated data modeling, that surface actionable insights. By understanding real-world human experiences and having the ability to act within seconds of observation, our customers can solve business-critical issues and focus on growing their businesses ahead of the competition. Examples of the brands Conviva has helped fuel streaming growth for include DAZN, Disney+, HBO, Hulu, NBCUniversal, Paramount+, Peacock, Sky, Sling TV, Univision, and Warner Bros Discovery.
Privately held, Conviva is headquartered in Silicon Valley, California with offices and people around the globe. For more information, visit us at www.conviva.com. Join us to help extend our leadership position in big data streaming analytics to new audiences and markets!
● Own the development and deployment of one or more integral modules (APIs and services) that are part of AgroStar’s technology platform, used by farmers for communication, decision support, and ecommerce
● Collaborate with Product Managers and Senior Engineers to understand and design the modules.
● Own the quality of the modules, and work with QA and Tech Support teams to identify hotspots and drive continuous improvements.
● Bachelor’s degree or higher, with coursework in Programming/Data Structures/Algorithms
● Experience programming in Python, GoLang, or web technologies (HTML, CSS, JavaScript)
● Sound understanding of Data Structures, Algorithms, Object-Oriented Design, and Databases (MySQL or MongoDB)
● Passionate about software engineering practices
● Excellent communication skills
Mixing technology, data, and first-in-class innovation, EagleView® is not only leading the property data analytics market, but also changing lives along the way. Come join us and make great things happen!
EagleView is a fast-growing technology company driving game changing innovation in multibillion-dollar markets such as property insurance, energy, construction, and government. Leveraging 17 years of the most advanced aerial imaging technology in the world, along with the most recent advances in machine learning and AI, EagleView is fundamentally transforming how our customers do business.
At EagleView, we believe that making our culture engaging and empowering are keys to success.
Job Description
We are looking for talented Software Engineers to join our agile development team. As an experienced member, you will participate in all aspects of the software development life cycle: scoping, design, coding, testing, implementation, and support. You will help develop highly available, scalable, secure, and flexible solutions for our ecommerce platform. In this role, you must be able to multi-task, quickly adapt to new development environments, learn new systems, create reliable and maintainable code, and find creative, scalable solutions to difficult and complex problems. You take pride and ownership in your work as well as in the overall contributions of the team. You must also be able to take a system-wide view, recognize how system components and disparate technologies are used, and diagnose and debug components across an entire system. Your ability to communicate clearly and concisely (both written and verbal) is also key, as is being a self-starter.
* Bachelor's Degree required, preferably in Computer Science or related field
* 1+ years of software development experience using Java, C++, Golang, Python, or any other object-oriented programming language
* Experience working on JavaScript would be an added advantage
* Experience using linear algebra, 3D scene models, or Cairo (2D drawing framework) will be an added advantage
* Working experience with any of these will be an added advantage: POV-Ray, GeoTIFF, WGS 84, and Web Mercator projection
* Experience with cloud technologies AWS SDK and containerization
* Experience with PostgreSQL or other Relational Databases
* Test-driven development mindset and a focus on quality, scalability and performance
* Strong programming fundamentals and ability to produce high quality code
* Excellent communication, collaboration, reporting, analytical and problem-solving skills
* Solid understanding of Agile (SCRUM) Development Process required
About the Role
The Dremio India team owns the DataLake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides many opportunities to learn, deliver, and grow in your career. We are looking for technical leaders with passion and experience in architecting and delivering high-quality distributed systems at massive scale.
Responsibilities & ownership
- Lead end-to-end delivery and customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product
- Lead and mentor others on concurrency and parallelization to deliver scalability, performance, and resource optimization in a multithreaded and distributed environment
- Propose and promote strategic company-wide tech investments, balancing business goals, customer requirements, and industry standards
- Lead the team to solve complex, unknown and ambiguous problems, and customer issues cutting across team and module boundaries with technical expertise, and influence others
- Review and influence designs of other team members
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Partner with other leaders to nurture innovation and engineering excellence in the team
- Drive priorities with others to facilitate timely accomplishments of business objectives
- Perform RCA of customer issues and drive investments to avoid similar issues in the future
- Collaborate with Product Management, Support, and field teams to ensure that customers are successful with Dremio
- Proactively suggest learning opportunities about new technology and skills, and be a role model for constant learning and growth
Requirements
- B.S./M.S/Equivalent in Computer Science or a related technical field or equivalent experience
- Fluency in Java/C++ with 15+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models and their use in developing distributed and scalable systems
- 8+ years of experience developing complex, scalable distributed systems and delivering, deploying, and managing microservices successfully
- Subject Matter Expert in one or more of: query processing and optimization, distributed systems, concurrency, microservice-based architectures, data replication, networking, storage systems
- Experience in taking company-wide initiatives, convincing stakeholders, and delivering them
- Expert in solving complex, unknown and ambiguous problems spanning across teams and taking initiative in planning and delivering them with high quality
- Ability to anticipate and propose plan/design changes based on changing requirements
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using latest technologies
- Hands-on experience working on projects on AWS, Azure, and GCP
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and GCP)
- Understanding of distributed file systems such as S3, ADLS or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
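Since the role above centers on concurrency and parallelization for scalability, here is a minimal fan-out/fan-in sketch using the JDK's `CompletableFuture` and a fixed thread pool. The partition-sum workload and all names are our own illustrative choices; the pattern (partial results computed in parallel, then combined) is what matters:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Fan-out/fan-in sketch: partitions of a dataset are aggregated in parallel
// on a fixed thread pool, and the partial results are combined.
public class ParallelSum {
    public static long sum(List<long[]> partitions, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<CompletableFuture<Long>> futures = partitions.stream()
                    .map(p -> CompletableFuture.supplyAsync(() -> {
                        long s = 0;
                        for (long v : p) s += v; // per-partition partial sum
                        return s;
                    }, pool))
                    .toList();
            // join() blocks until each partial result is ready, then combine.
            return futures.stream().mapToLong(CompletableFuture::join).sum();
        } finally {
            pool.shutdown();
        }
    }
}
```

Because addition is associative and commutative, the combine step is order-independent; for non-commutative aggregations the fan-in would need to preserve partition order.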
Libraries, Interface, Language Fundamentals
Data Structures, Algorithms, Collections
Design Patterns, Singletons
Multithreading
Messaging, CI/CD
Databases
Tooling:
Application Layering, Architectural Design
Unit Testing/ Integration Testing
Any DevOps tooling experience (Docker/Kubernetes/Terraform)
Tool Configuration and Log Monitoring
Role and Responsibilities:
- As a backend developer, your primary focus will be the development of all server-side systems
- A basic understanding of front-end technologies is necessary as well. You will test, secure and deploy your code
- Work experience on Node.js is a must along with a server-side framework like Express.js
- Strong proficiency in JavaScript
- Writing reusable, testable, and efficient code
- Experience and proficiency integrating with REST APIs
- Understanding of scalable computing systems, software architecture, data structures, and algorithms
- Experience in working with databases such as MongoDB, Redis, Elasticsearch, etc.
- Understanding of the Agile/Scrum development cycle.
Skills Required:
- At least 2 years of experience developing backends using Node.js; should be well versed in its asynchronous nature and event loop, and know its quirks and workarounds.
- Good knowledge of MongoDB (must) and of MySQL or another SQL database.
- Good knowledge of Redis, its data types, and their use cases.
- Experience developing and deploying REST APIs.
- Knowledge and working experience in Cloud environment - AWS or Azure
- Good knowledge of Unit Testing and available Test Frameworks.
- Should be a fast learner and a go-getter without any fear of trying out new things










