50+ Apache Kafka Jobs in India
Role Objective:
The Big Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
Roles & Responsibilities:
- Sound knowledge of Spark architecture, distributed computing, and Spark Streaming.
- Proficient in Spark, including RDD and DataFrame core functions, troubleshooting, and performance tuning.
- Good understanding of object-oriented concepts and hands-on experience with Scala, with excellent programming logic and technique.
- Good grasp of functional programming and OOP concepts in Scala.
- Good experience in SQL – should be able to write complex queries.
- Managing the team of Associates and Senior Associates and ensuring the utilization is maintained across the project.
- Able to mentor new members for onboarding to the project.
- Understand client requirements and be able to design, develop from scratch, and deliver.
- AWS cloud experience would be preferable.
- Design, build and operationalize large scale enterprise data solutions and applications using one or more of AWS data and analytics services - DynamoDB, RedShift, Kinesis, Lambda, S3, etc. (preferred)
- Hands on experience utilizing AWS Management Tools (CloudWatch, CloudTrail) to proactively monitor large and complex deployments (preferred)
- Experience in analyzing, re-architecting, and re-platforming on-premises data warehouses to data platforms on AWS (preferred)
- Leading client calls to flag any delays, blockers, or escalations and to collate all requirements.
- Managing project timing, client expectations and meeting deadlines.
- Should have played project and team management roles.
- Facilitate meetings within the team on regular basis.
- Understand business requirement and analyze different approaches and plan deliverables and milestones for the project.
- Optimization, maintenance, and support of pipelines.
- Strong analytical and logical skills.
- Ability to comfortably tackle new challenges and learn.
External Skills And Expertise
Must have Skills:
- Scala
- Spark
- SQL (Intermediate to advanced level)
- Spark Streaming
- AWS preferable/Any cloud
- Kafka /Kinesis/Any streaming services
- Object-Oriented Programming
- Hive, ETL/ELT design experience
- CI/CD experience (ETL pipeline deployment)
Good to Have Skills:
- AWS Certification
- Git/similar version control tool
- Knowledge in CI/CD, Microservices
Secondary Skills: Streaming, Archiving, AWS/Azure/Cloud
Role:
· Should have strong programming and support experience in Java, J2EE technologies
· Should have good experience in Core Java, JSP, Servlets, JDBC
· Good exposure to Hadoop development (HDFS, MapReduce, Hive, HBase, Spark)
· Should have 2+ years of Java experience and 1+ years of experience in Hadoop
· Should possess good communication skills
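For candidates brushing up, the MapReduce model named above can be illustrated in miniature with plain Java streams. This is a conceptual sketch only; real Hadoop jobs implement `Mapper`/`Reducer` classes against the Hadoop API, which is not shown here:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCount {
    // "Map" phase: split text into words; "Reduce" phase: sum counts per word.
    public static Map<String, Long> count(String text) {
        return Arrays.stream(text.toLowerCase().split("\\W+"))
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> counts = count("to be or not to be");
        System.out.println(counts.get("to")); // 2
        System.out.println(counts.get("be")); // 2
    }
}
```

The grouping collector plays the role of the shuffle-and-reduce phase, aggregating per-key counts; Hadoop does the same across a cluster rather than in one JVM.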
Key Result Areas :
● Communication skills and clarity in your reporting and communication.
● Knowledge in the Java programming languages you use.
● Knowledge in the Spring Framework and libraries you use.
● Knowledge in the tool-sets you use.
● Analytical thinking and experience (practical when you design the architecture of the
“thing” prior to coding it).
● Technological understanding (ability to see your new “thing” in a wider perspective, for
example how a small library fits into a large project or product).
● Creativity (finding better ways to achieve your project goals).
● Coding (testable code, clean reusable code, maintainable code, readable code, bug-
free code, beautiful code).
● Correctness (few bugs, few iterations with refactoring).
● Learning (your ability to learn about and use new technologies, protocols, libraries, or
even languages as needed).
● Durability (to stay on track no matter what, even when you feel dead bored, or in way
over your head).
● Adherence to Effort and Schedule
● Hand-holding the team for day-to-day activities and monitoring their progress.
● Leading the team technically to ensure on-time delivery and best effort.
Essential Skills:
● Strong hands-on experience in Core Java, Spring Framework, Maven, relational databases.
● Comfortable with the GitHub source code repository.
● Experience in developing REST APIs using Spring MVC, Play Framework.
● Good to have: NoSQL, Neo4j, Cassandra, Elasticsearch.
● Experience in developing Apache Samza jobs (optional).
● Good understanding of CI/CD pipelines.
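A minimal sketch of the REST endpoint pattern the skills above refer to, using the JDK's built-in `com.sun.net.httpserver` rather than Spring MVC or Play (which need external dependencies). The `/health` route and JSON body are illustrative choices, not part of any listed project:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class HealthApi {
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        // GET /health returns a small JSON document, mirroring a typical REST health check.
        server.createContext("/health", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start(0); // 0 = bind any free port
        System.out.println("Listening on port " + server.getAddress().getPort());
        server.stop(0);
    }
}
```

In Spring MVC the same endpoint would be a `@GetMapping`-annotated controller method; the request/response shape is identical.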
We are looking for an experienced Java Developer with strong proficiency in Kafka and MongoDB to join our dynamic team. The ideal candidate will have a solid background in designing and developing high-performance, scalable, and reliable applications in a microservices architecture. You will be responsible for building real-time data processing systems, integrating various services, and ensuring smooth data flow across systems.
Key Responsibilities:
- Design, develop, and maintain scalable Java applications with a focus on performance and reliability.
- Build and maintain Kafka-based real-time data pipelines for handling high-volume, low-latency data.
- Work with MongoDB to design and optimize database schemas and queries for high throughput and availability.
- Collaborate with cross-functional teams to define, design, and implement new features and improvements.
- Troubleshoot and resolve issues related to system performance, scalability, and reliability.
- Ensure software quality through best practices, including testing, code reviews, and continuous integration.
- Implement and maintain security best practices in both code and data handling.
- Participate in agile development cycles, including sprint planning, daily standups, and retrospectives.
Required Skills & Qualifications:
- 7+ years of experience in Java development, with a strong understanding of core Java concepts (J2EE, multithreading, etc.).
- Hands-on experience with Apache Kafka, including setting up brokers, producers, consumers, and understanding Kafka Streams.
- Proficient in working with MongoDB for designing efficient data models, indexing, and optimizing queries.
- Experience with microservices architecture and RESTful APIs.
- Familiarity with containerization technologies like Docker and orchestration tools like Kubernetes is a plus.
- Strong understanding of distributed systems, message-driven architectures, and event streaming.
- Familiarity with version control systems like Git.
- Excellent problem-solving skills, with the ability to debug and optimize code for high-performance systems.
- Experience with CI/CD pipelines and automated testing.
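The producer/consumer pattern behind the Kafka pipelines described above can be sketched in-process with `java.util.concurrent.BlockingQueue`. This is a conceptual stand-in only; a real implementation would use `KafkaProducer`/`KafkaConsumer` from the `kafka-clients` library, with brokers, topics, and consumer groups:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MiniPipeline {
    private static final String POISON = "__END__"; // sentinel telling the consumer to stop

    public static List<String> run(List<String> events) throws InterruptedException {
        BlockingQueue<String> topic = new ArrayBlockingQueue<>(16); // stands in for a Kafka topic
        List<String> processed = new ArrayList<>();

        // Consumer thread: drains the "topic" until it sees the sentinel.
        Thread consumer = new Thread(() -> {
            try {
                for (String msg = topic.take(); !msg.equals(POISON); msg = topic.take()) {
                    processed.add(msg.toUpperCase());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // Producer: publishes each event, then the sentinel.
        for (String e : events) topic.put(e);
        topic.put(POISON);

        consumer.join();
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(List.of("signup", "login"))); // [SIGNUP, LOGIN]
    }
}
```

The bounded queue also illustrates back-pressure: `put` blocks when the buffer is full, much as a Kafka producer blocks when its send buffer fills.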
A global Customer Data Platform-led personalization and real-time marketing automation solution provided by Immensitas Pvt. Ltd. and its subsidiaries to clients across the world on a Software as a Service (SaaS) subscription basis. It is known to deliver superior customer experiences, resulting in increased marketing leads, conversions, customer retention, and growth for enterprises in different industries.
Education: B.E/B-Tech/M-Tech/M.S in Computer Science or IT from premier institutes
Skill Set:
● 1-3 years of relevant experience with Java, Algorithms, Data Structures, & Optimizations in addition to Coding
● Good Aptitude/Analytical skills (emphasis will be on Algorithms, Data Structures, & Optimizations in addition to Coding)
● Good knowledge of Databases - SQL, NoSQL
● Knowledge of Unit Testing a plus
Soft Skills:
● Has an appreciation of technology and its ability to create value in the marketing domain
● Excellent written and verbal communication skills
● Active & contributing team member
● Strong work ethic with demonstrated ability to meet and exceed commitments
Job Summary:
Senior Java developer will be responsible for many duties throughout the development lifecycle of applications, from concept and design right through to testing.
Duties/Responsibilities:
- To support and maintain existing Java code base, debug the application
- To analyse user and business requirements and design and implement appropriate solutions
- To design and code programs following in-house standards and good design principles
- To ensure that all programs are documented to the company standards
- To create unit test plans and perform unit testing of the programs
- To provide advice and guidance to other members of the team
Required Skills/Abilities:
- Hands on experience in designing and developing applications using Java EE platforms
- Object Oriented analysis and design using common design patterns
- Good knowledge of Relational Databases, SQL and ORM technologies (JPA2, Hibernate)
- Experience in the Spring Framework
- Experience in developing web applications using at least one popular web framework (JSF, Wicket, GWT, Spring MVC)
- Experience in RESTFul webservices
- Experience with test-driven development
- Exposure to portal/mobility development - Desired
- Exposure to any of middleware solutions like MQ, Oracle fusion middleware(WebLogic), WebSphere, Open Source
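One of the "common design patterns" the role above asks for, sketched in plain Java. The `Customer` class and its fields are hypothetical, chosen only to show the Builder shape:

```java
public class Customer {
    private final String name;   // required field
    private final String email;  // optional field

    private Customer(Builder b) {
        this.name = b.name;
        this.email = b.email;
    }

    public String describe() {
        return name + (email == null ? "" : " <" + email + ">");
    }

    // Builder collects optional fields step by step, then constructs an immutable Customer.
    public static class Builder {
        private final String name;
        private String email;

        public Builder(String name) { this.name = name; }
        public Builder email(String email) { this.email = email; return this; }
        public Customer build() { return new Customer(this); }
    }

    public static void main(String[] args) {
        Customer c = new Customer.Builder("Asha").email("asha@example.com").build();
        System.out.println(c.describe()); // Asha <asha@example.com>
    }
}
```

The payoff is an immutable object with readable construction, avoiding telescoping constructors as optional fields grow.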
Lean provides developers with a universal API to access their customers' financial accounts across the Middle East. We recognized that infrastructure barriers were hindering fintech growth in our home markets and set out to build a solution. With Lean, developers at any level can now create advanced financial solutions without grappling with infrastructure complexities, allowing them to focus squarely on customer needs.
Why Join Us?
Our products have garnered enthusiastic feedback from both developers and customers. With Sequoia leading our $33 million Series A round, our debut in the GCC marks just the beginning. We're committed to expanding regionally and enhancing stakeholder value. If you thrive on solving challenges and making a lasting impact, Lean is the place for you.
We offer competitive salaries, private healthcare, flexible office hours, and ensure every team member holds meaningful equity. Join us on our journey of enabling the next wave of financial innovation!
About the Role
As a Staff Software Engineer, you will play a pivotal role in developing the infrastructure that supports the future of the financial ecosystem. We seek a motivated problem-solver who thrives on challenges and delivers compelling solutions. You'll lead knowledge sharing across our engineering team, mentor individuals, and contribute to complex feature design and development.
Requirements
- 8+ years of professional experience in Java, Spring Boot, and Microservices
- Proficiency in PostgreSQL, Kafka, Scripting, and REST
- Experience as a Squad/Tech Lead, adept at designing and developing complex features
- Worked in distributed teams and with open-source APIs
- Eagerness to learn and implement new technologies
- Familiarity with CI/CD and DevOps systems and processes
- Self-motivated with a preference for autonomy and ownership
- Startup experience and interest in the fintech industry, particularly Open Banking
Bonus
- Experience in developing payment solutions (P2P, B2C, B2B)
- Previous work in the financial sector
Join us and shape the future of fintech with Lean!
About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have 1000+ employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website - https://www.gohighlevel.com/
YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 450K+ businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
Work at scale: our infrastructure handles around 30 billion+ API hits, 20 billion+ message events, and more than 200 terabytes of data.
About the role:
We are seeking an experienced Engineering Manager to lead our Generative AI teams. You will inspire and manage a team of engineers, driving performance and innovation. Your role will include developing robust AI-driven products and overseeing strategic project management.
Team-Specific Focus Areas:
Conversations AI
-Develop AI solutions for appointment booking, forms filling, sales, and intent recognition
-Ensure seamless integration and interaction with users through natural language processing and understanding
Workflows AI
-Create and optimize AI-powered workflows to automate and streamline business processes
Voice AI
-Focus on VOIP technology with an emphasis on low latency and high-quality voice interactions
-Fine-tune voice models for clarity, accuracy, and naturalness in various applications
Support AI
-Integrate AI solutions with FreshDesk and ClickUp to enhance customer support and ticketing systems
-Develop tools for automated response generation, issue resolution, and workflow management
Platform AI
-Oversee AI training, billing, content generation, funnels, image processing, and model evaluations
-Ensure scalable and efficient AI models that meet diverse platform needs and user demands
Requirements:
- Expertise with large scale Conversation Agents along with Response Evaluations
- Extensive hands-on experience with Node.Js and Vue.js (or React/Angular)
- Experience with scaling the services to at least 200k+ MAUs
- Bachelor's degree or equivalent experience in Engineering or related field of study
- 5+ years of engineering experience with 1+ years of management experience
- Strong people, communication, and problem-solving skills
Responsibilities:
- Mentor and coach individuals on the team
- Perform evaluations and give feedback to help them progress
- Planning for fast and flexible delivery by breaking down into milestones
- Increase efficiency by patterns, frameworks and processes
- Improving product and engineering quality
- Help drive product strategy
- Design and plan open-ended architecture that are flexible for evolving business needs
EEO Statement:
At HighLevel, we value diversity. In fact, we understand it makes our organisation stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.
About the role:
Seeking a Full Stack Developer with at least 1 year of hands-on experience in Node.js and Vue.js (or React/Angular). You will be instrumental in building cutting-edge, AI-powered products.
Team-Specific Focus Areas:
Conversations AI:
-Develop AI solutions for appointment booking, forms filling, sales, and intent recognition
-Ensure seamless integration and interaction with users through natural language processing and understanding
Workflows AI:
-Create and optimize AI-powered workflows to automate and streamline business processes
Voice AI:
-Focus on VOIP technology with an emphasis on low latency and high-quality voice interactions
-Fine-tune voice models for clarity, accuracy, and naturalness in various applications
Support AI:
-Integrate AI solutions with FreshDesk and ClickUp to enhance customer support and ticketing systems
-Develop tools for automated response generation, issue resolution, and workflow management
Platform AI:
-Oversee AI training, billing, content generation, funnels, image processing, and model evaluations
-Ensure scalable and efficient AI models that meet diverse platform needs and user demands
Responsibilities:
- REST APIs - Understanding REST philosophy. Writing secure, reusable, testable, and efficient APIs.
- Database - Designing collection schemas, and writing efficient queries
- Frontend - Developing user-facing features and integration with REST APIs
- UI/UX - Being consistent with the design principles and reusing components wherever possible
- Communication - With other team members, product team, and support team
Requirements:
- Expertise with large scale Conversation Agents along with Response Evaluations
- Good hands-on experience with Node.Js and Vue.js (or React/Angular)
- Experience working with production-grade applications with decent usage
- Bachelor's degree or equivalent experience in Engineering or related field of study
- 3+ years of engineering experience
- Expertise with MongoDB
- Proficient understanding of code versioning tools, such as Git
- Strong communication and problem-solving skills
- Strong knowledge in Kafka development and architecture.
- Hands-on experience on KSQL Database.
- Very good communication, analytical & problem-solving skills.
- Proven hands-on development experience with Kafka platforms (Lenses, Confluent).
- Strong knowledge of the framework (Kafka Connect).
- Very comfortable with Shell scripting & Linux commands.
- Experience in DB2 database
Client is a Customer Data Platform-led personalization and real-time marketing automation solution that delivers superior customer experiences resulting in increased conversions, retention, and growth for enterprises.
● 2-4 years of relevant experience with Algorithms, Data Structures, & Optimizations in addition to Coding
● Education: B.E/B-Tech/M-Tech/M.S/MCA in Computer Science or Equivalent from premier institutes only
Skill Set
● Good Aptitude/Analytical skills (emphasis will be on Algorithms, Data Structures,&
Optimizations in addition to Coding)
● Good System design and Class design
● Good knowledge of Databases (Both SQL/NOSQL)
● Good knowledge of Kafka, Streaming Systems
● Good Knowledge of Java, Unit Testing
Soft Skills
● Has appreciation of technology and its ability to create value in the CDP domain
● Excellent written and verbal communication skills
● Active & contributing team member
● Strong work ethic with demonstrated ability to meet and exceed commitments
Others: Experience of having worked in a start-up is a plus
Your Opportunity Join our dynamic team as a Full Stack Software Dev, where you'll work at the intersection of innovation and leadership. You'll be part of a passionate group of engineers dedicated to building cutting-edge SaaS solutions that solve real customer challenges. This role is perfect for an experienced engineer who thrives on managing teams, collaborating with leadership, and driving product development. You’ll work directly with the CEO and senior architects, ensuring that our products meet the highest design and performance standards.
Key Responsibilities
- Lead, manage, and mentor a team of engineers to deliver scalable, high-performance solutions.
- Coordinate closely with the CEO and product leadership to align on goals and drive the vision forward.
- Collaborate with distributed teams to design, build, and refine core product features that serve a global audience.
- Stay hands-on with coding and architecture, driving key services and technical initiatives from end to end.
- Troubleshoot, debug, and optimize existing systems to ensure smooth product operations.
Requirements & Technical Skills
- Bachelor's/Master's/PhD in Computer Science, Engineering, or related fields (B.Tech, M.Tech, BCA, B.E./M.E.).
- 4 to 8 years of hands-on experience as a software developer, ideally in a SaaS environment.
- Proven track record in developing scalable, distributed systems and services.
- Solid understanding of the Software Development Lifecycle (SDLC).
- Strong programming experience in Spring & Hibernate with Kotlin, React, Nest.js, Python, and Shell scripting.
- Expertise in Unix-based systems, container technologies, and virtual machines.
- Knowledge of both relational and non-relational databases (MySQL, PostgreSQL, MongoDB, DocumentDB).
Preferred Qualifications
- Familiarity with Agile methodologies.
- Experience working on both structured and unstructured data sources.
Soft Skills
- Strong leadership, coaching, and mentoring capabilities to inspire and guide a team of engineers.
- Excellent communication skills, with the ability to present complex technical concepts clearly to non-technical stakeholders.
- Adaptable to new technologies in a fast-paced environment.
Job description:
- Hands-on skills in the Java programming language
- Experience of testing of Cloud Native applications with exposure of Kafka.
- Understanding of the concepts of Kubernetes (K8s), caching, REST/gRPC, and observability
- Experience with good programming or scripting practices and tools: code review, ADO/Jenkins, etc.
- Apply expertise in Java, API Testing, Cucumber or other test frameworks to design, develop and maintain automation test suites.
- Intimate familiarity with QA concepts: white-/black-/grey-box testing, acceptance/regression test, system integration test, performance/stress test, and security tests
at Lean Technologies
About Us
Lean provides developers with a universal API to access their customers' financial accounts from across the Middle East. We realized that infrastructure barriers to entry were suppressing the growth of the fintech industry across our home markets and sought to build a product that would solve that once and for all. Using Lean, developers of all levels of sophistication can now build cutting-edge financial solutions without worrying about the nuances of the infrastructure layer, freeing them up to focus on what matters most to their customers.
About the role
Our core product aligns with and empowers developers to build the financial applications they are passionate about. As a Senior Software Engineer, you will take a leading role developing and building the product line that will underpin the future of the financial ecosystem. We are looking for a highly motivated, resilient problem-solver. Someone who seeks out challenges and is ready to implement compelling solutions to complex problems. Your role will consist of knowledge sharing across the entire engineering team, and mentoring and coaching individuals as well.
Requirements
- 5+ years of professional experience in Java, Spring Boot, and Microservices, along with proficiency in PostgreSQL and REST APIs.
- Having an understanding of how to design and develop complex features
- Experience working in a distributed team
- Experience working on open source APIs
- Desire to learn new technologies and implement them
- Knowledge of Kafka, Redis, and NoSQL databases
- To be self-motivated and comfortable with autonomy, with a desire to take complete ownership of the product from inception to deployment
- Interest in the Fintech industry, especially related to Open Banking
Bonus
- Experience working in the financial sector
- Experience working in startup environments
- Experience developing payment solutions (P2P, B2C, B2B)
- Experience on CI/CD and DevOps systems and processes
What Excites you?
- Freedom and the opportunity to build and experiment with new things.
- Being part of a team that is revamping its codebases and elevating its code to the next level while adhering to best practices.
- Having the ability to experiment with new ideas and collaborate with like-minded people.
What's in it for you?
- Competitive salary and benefits package
- The opportunity to work on a product that aligns with and empowers developers to build the financial applications they are passionate about
- The chance to work with a team of talented, dedicated professionals who are passionate about the fintech industry
- As one of the first hires in Pune, you will join a dedicated and talented team that is deeply passionate about the fintech industry.
- You will have the opportunity to play a crucial role in setting the tone and culture for our expanding operations in Pune.
at Freestone Infotech Pvt. Ltd.
Core Experience:
• Experience in Core Java, J2EE, Spring/Spring Boot, Hibernate, Spring REST, Linux, JUnit, Maven, Design Patterns.
• Sound knowledge of RDBMS like MySQL/Postgres, including schema design.
• Exposure to Linux environment.
• Exposure to Docker and Kubernetes.
• Basic knowledge of cloud services from AWS, Azure, or GCP.
• Proficient in general programming, logic, problem solving, data structures & algorithms
• Good analytical, grasping and problem-solving skills.
Secondary Skills:
• Agile / Scrum Development Experience preferred.
• Comfortable working with a microservices architecture and familiarity with NoSQL solutions.
• Experience in Test Driven Development.
• Excellent written and verbal communication skills.
• Hands-on skills in configuration of popular build tools, like Maven and Gradle
• Good knowledge of testing frameworks such as JUnit.
• Good knowledge of coding standards, source code organization and packaging/deploying.
• Good knowledge of current and emerging technologies and trends.
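The testing-framework skills above (JUnit, TDD) usually look like the sketch below. Plain checks stand in for JUnit's `@Test`/`assertEquals`, since JUnit is an external dependency, and `slugify` is a hypothetical unit under test:

```java
public class SlugifierTest {
    // Hypothetical unit under test: turns a title into a URL slug.
    static String slugify(String title) {
        return title.trim().toLowerCase()
                    .replaceAll("[^a-z0-9]+", "-")
                    .replaceAll("(^-+|-+$)", "");
    }

    // Stand-in for JUnit's assertEquals; real projects would use @Test methods instead.
    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    public static void main(String[] args) {
        check(slugify("Hello, World!").equals("hello-world"), "punctuation collapses to dashes");
        check(slugify("  Spring Boot 3  ").equals("spring-boot-3"), "whitespace is trimmed");
        System.out.println("all tests passed");
    }
}
```

In a TDD workflow these checks are written first, fail, and then drive the implementation of `slugify` until they pass.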
Job Responsibilities:
• Design, Development and Delivery of Java based enterprise-grade applications.
• Ensure best practices, quality and consistency within various design and development phases.
• Develop, test, implement and maintain application software working with established processes.
• Work with QA and help them with test automation.
• Work with Technical Writers and help them document the features you have developed.
Education and Experience:
• Bachelor’s / master’s degree in computer science or information technology or related field
As a Kafka Administrator at Cargill you will work across the full set of data platform technologies, spanning on-prem and SaaS solutions, empowering highly performant, modern, data-centric solutions. Your work will play a critical role in enabling analytical insights and process efficiencies for Cargill's diverse and complex business environments. You will work in a small team that shares your passion for building, configuring, and supporting platforms while sharing, learning, and growing together.
- Develop and recommend improvements to standard and moderately complex application support processes and procedures.
- Review, analyze and prioritize incoming incident tickets and user requests.
- Perform programming, configuration, testing and deployment of fixes or updates for application version releases.
- Implement security processes to protect data integrity and ensure regulatory compliance.
- Keep an open channel of communication with users and respond to standard and moderately complex application support requests and needs.
MINIMUM QUALIFICATIONS
- Minimum 2-4 years of experience
- Knowledge of Kafka cluster management, alerting/monitoring, and performance tuning
- Full-ecosystem Kafka administration (Kafka, ZooKeeper, Kafka REST Proxy, Kafka Connect)
- Experience implementing Kerberos security
- Preferred:
- Experience in Linux system administration
- Authentication plugin experience such as basic, SSL, and Kerberos
- Production incident support including root cause analysis
- AWS EC2
- Terraform
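Since the qualifications above call out Kerberos security and authentication plugins for Kafka, here is a minimal, hedged sketch of the client-side configuration such a setup typically involves. The broker address is a placeholder (not from the listing), and a real deployment would additionally need a JAAS configuration and a keytab on the client host.

```java
import java.util.Properties;

public class KerberosClientConfig {
    // Builds the client-side properties a Kerberos-secured (SASL_SSL/GSSAPI)
    // Kafka cluster typically requires. The bootstrap address is a
    // placeholder; "kafka" is the conventional Kerberos service name.
    public static Properties kerberosClientProps(String bootstrapServers) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", bootstrapServers);
        props.setProperty("security.protocol", "SASL_SSL");
        props.setProperty("sasl.mechanism", "GSSAPI");
        props.setProperty("sasl.kerberos.service.name", "kafka");
        return props;
    }

    public static void main(String[] args) {
        Properties props = kerberosClientProps("broker1.example.com:9093");
        props.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

The same property names apply to producers, consumers, and admin clients alike, which is why interviewers often probe them when "authentication plugin experience" is on the list.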
Job Summary
5-6 years of proven and progressive experience using Java, Spring Boot Web, and OOP. Experience working with Kafka, Docker, and the Kubernetes CLI. A strong understanding of object orientation, SOLID principles, clean coding, and design patterns is desirable.
Job Requirements
● Experience working with Java, Spring Boot Web, Gradle, OOP.
● Experience working with Kafka, Docker, Kubernetes CLI.
● Great OO skills, including strong design patterns knowledge. Understanding functional programming would be a plus.
● Strong understanding of TDD is desirable.
● You have led software development teams using Agile, Lean, and/or Continuous Delivery approaches such as TDD, continuous integration, pairing, and infrastructure automation.
● Passion for software engineering and craftsman-like coding process.
● Experience in developing applications integrating with RDBMS like Postgres.
Job Responsibilities
● You will use continuous delivery practices to deliver high-quality software as well as value to end customers as early as possible.
● You will work in collaborative, value-driven teams to build innovative customer experiences for our clients.
● Create large-scale distributed systems out of microservices.
● Efficiently utilize DevOps tools and practices to build and deploy software.
● You will oversee or take part in the entire cycle of software consulting and delivery from ideation to deployment and everything in between.
● You will act as a mentor for less-experienced peers through both your technical knowledge and leadership skills.
Perks
● Work-Life Balance
● Skill Development
● Object Bootcamp
● Sabbatical Leave
● Parental Leaves
● Office Perks (Free Meal, Snacks)
● Challenging work
Culture
● Open Culture
● Flat Hierarchy
● 360-degree feedback
● Mentorship Program
● People Supportive
● Competitive & Friendly Environment
at PortOne
PortOne is re-imagining payments in Korea and other international markets. We are a Series B funded startup backed by prominent VC firms Softbank and Hanwa Capital.
PortOne provides a unified API for merchants to integrate with and manage all of the payment options available in Korea and SEA markets (Thailand, Singapore, Indonesia, etc.). It is currently used by 2000+ companies and processes multiple billions of dollars in annualized volume. We are building a team to take this product to international markets, and are looking for engineers with a passion for fintech and digital payments.
Culture and Values at PortOne
- You will be joining a team that stands for Making a difference.
- You will be joining a culture that identifies more with Sports Teams rather than a 9 to 5 workplace.
- This will be a remote role that allows you the flexibility to save time on your commute
- You will have peers who are/have
- Highly Self Driven with A sense of purpose
- High Energy Levels - Building stuff is your sport
- Ownership - Solve customer problems end to end - Customer is your Boss
- Hunger to learn - Highly motivated to keep developing new tech skill sets
Who you are
* You are an athlete and Devops/DevSecOps is your sport.
* Your passion drives you to learn and build stuff and not because your manager tells you to.
* Your work ethic is that of an athlete preparing for your next marathon. Your sport drives you and you like being in the zone.
* You are NOT a clockwatcher renting out your time, and do NOT have an attitude of "I will do only what is asked for"
* You enjoy solving problems and delighting users, both internally and externally
* You take pride in working on projects to successful completion involving a wide variety of technologies and systems
* You possess strong and effective communication skills and the ability to present complex ideas in a clear and concise way
* You are responsible, self-directed, a forward thinker, and operate with focus, discipline and minimal supervision
* A team player with a strong work ethic
Experience
* 2+ years of experience working as a DevOps/DevSecOps Engineer
* BE in Computer Science or equivalent combination of technical education and work experience
* Must have actively managed infrastructure components & devops for high quality and high scale products
* Proficient knowledge and experience on infra concepts - Networking/Load Balancing/High Availability
* Experience on designing and configuring infra in cloud service providers - AWS / GCP / AZURE
* Knowledge on Secure Infrastructure practices and designs
* Experience with DevOps, DevSecOps, Release Engineering, and Automation
* Experience with Agile development incorporating TDD / CI / CD practices
Hands on Skills
* Proficient in at least one high-level programming language: Go / Java / C
* Proficient in scripting (e.g. bash) to build and glue together devops/data-pipeline workflows
* Proficient in Cloud Services - AWS / GCP / AZURE
* Hands on experience on CI/CD & relevant tools - Jenkins / Travis / Gitops / SonarQube / JUnit / Mock frameworks
* Hands on experience with the Kubernetes ecosystem and container-based deployments - Kubernetes / Docker / Helm Charts / Vault / Packer / Istio / Flyway
* Hands on experience on Infra as code frameworks - Terraform / Crossplane / Ansible
* Version Control & Code Quality: Git / Github / Bitbucket / SonarQube
* Experience on Monitoring Tools: Elasticsearch / Logstash / Kibana / Prometheus / Grafana / Datadog / Nagios
* Experience with RDBMS Databases & Caching services: Postgres / MySql / Redis / CDN
* Experience with data pipeline/workflow tools: Airflow / Kafka / Flink / Pub-Sub
* DevSecOps - Cloud Security Assessment, Best Practices & Automation
* DevSecOps - Vulnerability Assessments/Penetration Testing for Web, Network and Mobile applications
* Preferable to have DevOps/Infra experience for products in the Payments/Fintech domain - Payment Gateways/Bank integrations etc
What will you do?
Devops
* Provisioning the infrastructure using Crossplane/Terraform/CloudFormation scripts.
* Creating and managing AWS services such as EC2, RDS, S3, VPC, KMS and IAM, including EKS clusters and RDS databases.
* Monitor the infra to prevent outages/downtimes and honor our infra SLAs
* Deploy and manage new infra components.
* Update and Migrate the clusters and services.
* Reduce cloud costs by scheduling or scaling down less-utilized instances.
* Collaborate with stakeholders across the organization such as experts in - product, design, engineering
* Uphold best practices in Devops/DevSecOps and Infra management with attention to security best practices
DevSecOps
* Cloud Security Assessment & Automation
* Modify existing infra to adhere to security best practices
* Perform Threat Modelling of Web/Mobile applications
* Integrate security testing tools (SAST, DAST) into CI/CD pipelines
* Incident management and remediation - monitoring security incidents, recovery from and remediation of issues
* Perform frequent Vulnerability Assessments/Penetration Testing for Web, Network and Mobile applications
* Ensure the environment is compliant with CIS, NIST, PCI etc.
Here are examples of apps/features you will be supporting as a Devops/DevSecOps Engineer
* Intuitive, easy-to-use APIs for payment processing.
* Integrations with local payment gateways in international markets.
* Dashboard to manage gateways and transactions.
* Analytics platform to provide insights
· IMMEDIATE JOINER
· Professional experience of 5+ years in Confluent Kafka administration
· Demonstrated design/development experience.
· Must have proven knowledge and practical application of – Confluent Kafka (Producers/ Consumers / Kafka Connectors / Kafka Stream/ksqlDB/Schema Registry)
· Experience in performance optimization of consumers, producers.
· Good experience debugging issues related to offsets, consumer lag, and partitions.
· Experience with Administrative tasks on Confluent Kafka.
· Kafka admin experience including, but not limited to: setting up new Kafka clusters, creating topics, granting permissions, resetting offsets, purging data, setting up connectors and replicator tasks, troubleshooting issues, monitoring Kafka cluster health and performance, and backup and recovery.
· Experience in implementing security measures for Kafka clusters, including access controls and encryption, to protect sensitive data.
· Experience with Kafka cluster install/upgrade techniques.
· Good experience with writing unit tests using JUnit and Mockito
· Experience working on client-facing projects.
· Exposure to any cloud environment like Azure is an added advantage.
· Experience in developing or working on REST Microservices
· Experience in Java and Spring Boot is a plus
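The offset and consumer-lag debugging this listing asks about reduces to simple arithmetic: for each partition, lag is the log end offset minus the group's committed offset. A broker-free sketch of that calculation follows; in practice both offset maps would come from the Kafka AdminClient APIs, but here they are passed in as plain maps so the example runs standalone.

```java
import java.util.HashMap;
import java.util.Map;

public class ConsumerLag {
    // Lag per partition = log end offset - committed offset, floored at 0.
    // Partitions with no committed offset are treated as fully unconsumed.
    public static Map<Integer, Long> lagPerPartition(Map<Integer, Long> endOffsets,
                                                     Map<Integer, Long> committed) {
        Map<Integer, Long> lag = new HashMap<>();
        for (Map.Entry<Integer, Long> e : endOffsets.entrySet()) {
            long consumed = committed.getOrDefault(e.getKey(), 0L);
            lag.put(e.getKey(), Math.max(0L, e.getValue() - consumed));
        }
        return lag;
    }

    // Total lag across all partitions, the number alerting usually keys on.
    public static long totalLag(Map<Integer, Long> endOffsets, Map<Integer, Long> committed) {
        return lagPerPartition(endOffsets, committed).values().stream()
                .mapToLong(Long::longValue).sum();
    }
}
```

Monitoring stacks (Burrow, Prometheus exporters, Confluent Control Center) compute essentially this quantity per group and partition.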
Key Result Areas :
● Communication skills and clarity in your reporting and communication.
● Knowledge of the Java programming language and the libraries you use.
● Knowledge of the Spring Framework and the libraries you use.
● Knowledge of the tool-sets you use.
● Analytical thinking and experience (practical when you design the architecture of the “thing” prior to coding it).
● Technological understanding (ability to see your new “thing” in a wider perspective, for example how a small library fits into a large project or product).
● Creativity (finding better ways to achieve your project goals).
● Coding (testable code, clean reusable code, maintainable code, readable code, bug-free code, beautiful code).
● Correctness (few bugs, few iterations with refactoring).
● Learning (your ability to learn about and use new technologies, protocols, libraries, or even languages as needed).
● Durability (to stay on track no matter what, even when you feel dead bored, or in way over your head).
● Adherence to effort and schedule estimates
● Hand-holding the team in day-to-day activities and monitoring their progress
● Technically leading the team to ensure on-time delivery
Essentials Skills:
● Strong hands-on experience in Core Java, the Spring framework, Maven, and relational databases.
● Comfortable with source code repositories such as GitHub.
● Experience in developing REST APIs using Spring MVC or the Play Framework.
● Good to have: NoSQL stores such as Neo4j, Cassandra, and Elasticsearch.
● Experience in developing Apache Samza jobs (optional).
● Good understanding of CI-CD pipeline.
Radisys Corporation is looking for Java backend developers with 6-10 years of experience for their Bangalore location.
The ideal candidate will be able to design and develop code for tasks after brainstorming sessions and applying best practices and coding conventions.
This position requires experience in Java, Spring, Spring Boot, microservices, message broker, and DB knowledge. Candidates should be skilled in developing enterprise applications that consist of FE, BE, and DB integration.
If you have experience with Docker and Kubernetes, that's an added advantage.
Radisys Corporation, a global leader in open telecom solutions, enables service providers to drive disruption with new open architecture business models. Our innovative technology solutions leverage open reference architectures and standards, combined with open software and hardware, to power business transformation for the telecom industry. Our services organization delivers systems integration expertise necessary to solve complex deployment challenges for communications and content providers.
Job Overview :
We are looking for a Lead Engineer - Java with a strong background in Java development and hands-on experience with J2EE, Springboot, Kubernetes, Microservices, NoSQL, and SQL. As a Lead Engineer, you will be responsible for designing and developing high-quality software solutions and ensuring the successful delivery of projects. This role requires 7 to 10 years of experience and is based in Bangalore, Karnataka, India. This position is a full-time role with excellent growth opportunities.
Qualifications and Skills :
- Bachelor's or master's degree in Computer Science or a related field
- Strong knowledge of Core Java, J2EE, and Springboot frameworks
- Hands-on experience with Kubernetes and microservices architecture
- Experience with NoSQL and SQL databases
- Proficient in troubleshooting and debugging complex system issues
- Experience in Enterprise Applications
- Excellent communication and leadership skills
- Ability to work in a fast-paced and collaborative environment
- Strong problem-solving and analytical skills
Roles and Responsibilities :
- Work closely with product management and cross-functional teams to define requirements and deliverables
- Design scalable and high-performance applications using Java, J2EE, and Springboot
- Develop and maintain microservices using Kubernetes and containerization
- Design and implement data models using NoSQL and SQL databases
- Ensure the quality and performance of software through code reviews and testing
- Collaborate with stakeholders to identify and resolve technical issues
- Stay up-to-date with the latest industry trends and technologies
Job Location: Pune
Experience: 4- 5 years
Functional Area - IT Software - Application Programming , Maintenance
Role Category : Programming & Design
Requirement / Job Description:
Core Skills:
Strong experience of Core Java (1.7 or higher), OOPS concepts and Spring framework (Core, AOP, Batch, JMS)
Demonstrated design using Web Services (SOAP and REST)
Demonstrated Microservices APIs design experience using Spring, Springboot
Demonstrable experience in databases like MySQL, PostgreSQL, Oracle PL/SQL development, etc.
Strong coding skills, good analytical and problem-solving skills
Excellent understanding of Authentication, Identity Management, REST APIs, security and best practices
Good understanding of web servers like Apache Tomcat, Nginx, Vert.x/Grizzly, JBoss, etc.
Experience with OAuth principles
Strong understanding of various Design patterns
Other Skills:
Familiarity with Java Cryptography Architecture (JCA)
Understanding of API gateways and service discovery components such as Zuul and Eureka Server
Familiarity with Apache Kafka, MQTT etc.
Responsibilities:
Design, develop, test and debug software modules for an enterprise security product
Find areas of optimization and produce high quality code
Collaborate with product managers and other members of the project team in requirements specification and detailed engineering analysis.
Collaborate with various stakeholders and help bring issues to proactive closure
Evaluate various technology trends and bring in best practices
Innovate and deliver out-of-the-box solutions
Adapt, thrive and deliver in a highly evolving and demanding product development team
Come up with ways to provide an improved customer experience
About us
Astra is a cyber security SaaS company that makes otherwise chaotic pentests a breeze with its one-of-a-kind Pentest Platform. Astra's continuous vulnerability scanner emulates hacker behavior to scan applications for 8300+ security tests. CTOs & CISOs love Astra because it helps them fix vulnerabilities in record time and move from DevOps to DevSecOps with Astra's CI/CD integrations.
Astra is loved by 650+ companies across the globe. In 2023 Astra uncovered 2 million+ vulnerabilities for its customers, saving customers $69M+ in potential losses due to security vulnerabilities.
We've been awarded by the President of France Mr. François Hollande at the La French Tech program and Prime Minister of India Shri Narendra Modi at the Global Conference on Cyber Security. Loom, MamaEarth, Muthoot Finance, Canara Robeco, ScripBox etc. are a few of Astra’s customers.
Role Overview
As an SDE 2 Back-end Engineer at Astra, you will play a crucial role in the development of a new vulnerability scanner from scratch. You will be architecting & engineering a scalable technical solution from the ground-up.
You will have the opportunity to work alongside talented individuals, collaborating to deliver innovative solutions and pushing the boundaries of what's possible in vulnerability scanning. The role requires deep collaboration with the founders, product, engineering & security teams.
Join our team and contribute to the development of a cutting-edge SaaS security platform, where high-quality engineering and continuous learning are at the core of everything we do.
Roles & Responsibilities:
- You will be joining our Vulnerability Scanner team which builds a security engine to identify vulnerabilities in technical infrastructure.
- You will be the technical product owner of the scanner, which would involve managing a lean team of backend engineers to ensure smooth implementation of the technical product roadmap.
- Research about security vulnerabilities, CVEs, and zero-days affecting cloud/web/API infrastructure.
- Work in an agile environment of engineers to architect, design, develop and build our microservice infrastructure.
- You will research, design, code, troubleshoot and support (on-call). What you create is also what you own.
- Writing secure, high quality, modular, testable & well documented code for features outlined in every sprint.
- Design and implement APIs in support of other services with a highly scalable, flexible, and secure backend using GoLang
- Hands-on experience with creating production-ready code & optimizing it by identifying and correcting bottlenecks.
- Driving strict code review standards among the team.
- Ensuring timely delivery of the features/products
- Working with product managers to ensure product delivery status is transparent & the end product always looks like how it was imagined
- Work closely with Security & Product teams in writing vulnerability detection rules, APIs etc.
Required Qualifications & Skills:
- 2-4 years of relevant development experience in Golang
- Experience in building a technical product from idea to production.
- Design and build highly scalable and maintainable systems in Golang
- Expertise in Goroutines and Channels to write efficient code that utilizes multi-core CPUs optimally
- Must have hands-on experience with managing AWS/Google Cloud infrastructure
- Hands on experience in creating low latency high throughput REST APIs
- Write test suites and maintain code coverage above 80%
- Working knowledge of PostgreSQL, Redis, Kafka
- Good to have experience in Docker, Kubernetes, Kafka
- Good understanding of Data Structures, Algorithms and Operating Systems.
- Understanding of cloud/web security concepts would be an added advantage
What We Offer:
- Adrenalin rush of being a part of a fast-growing company
- Fully remote & agile working environment
- A wholesome opportunity in a fast-paced environment where you get to build things from scratch, improve and influence product design decisions
- Holistic understanding of SaaS and enterprise security business
- Opportunity to engage and collaborate with developers globally
- Experience with security side of things
- Annual trips to beaches or mountains (last one was Chikmangaluru)
- Open and supportive culture
Egen is a data engineering and cloud modernization firm helping industry-leading companies achieve digital breakthroughs and deliver for the future, today. We are catalysts for change who create digital breakthroughs at warp speed. Our team of cloud and data engineering experts are trusted by top clients in pursuit of the extraordinary. An Inc. 5000 Fastest Growing Company 7 times, and recently recognized on the Crain’s Chicago Business Fast 50 list, Egen has also been recognized as a great place to work 3 times.
You will join a team of insatiably curious data engineers, software architects, and product experts who never settle for "good enough". Our Java Platform team's tech stack is based on Java8 (Spring Boot) and RESTful web services. We typically build and deploy applications as cloud-native Kubernetes microservices and integrate with scalable technologies such as Kafka in Docker container environments. Our developers work in an agile process to efficiently deliver high value data driven applications and product packages.
Required Experience:
- Minimum of Bachelor’s Degree or its equivalent in Computer Science, Computer Information Systems, Information Technology and Management, Electrical Engineering or a related field.
- Experience working with, and a strong understanding of, object-oriented programming and cloud technologies
- End-to-end experience delivering production-ready code with Java 8, Spring Boot, Spring Data, and API libraries
- Strong experience with unit and integration testing of the Spring Boot APIs.
- Strong understanding and production experience of RESTful APIs and microservice architecture.
- Strong understanding of SQL databases and NoSQL databases and experience with writing abstraction layers to communicate with the databases.
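The "abstraction layers" requirement above usually means the repository/DAO pattern: callers program against an interface, and SQL or NoSQL specifics stay hidden behind it. A hedged, in-memory sketch (all names here are illustrative, not from the listing):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class UserDao {
    // Callers depend on this interface, never on the storage technology,
    // so the database can be swapped without touching business logic.
    public interface UserRepository {
        void save(String id, String name);
        Optional<String> findName(String id);
    }

    // In-memory stand-in; a production implementation would wrap JDBC,
    // Spring Data, or a NoSQL client behind the same interface.
    public static class InMemoryUserRepository implements UserRepository {
        private final Map<String, String> store = new HashMap<>();

        public void save(String id, String name) {
            store.put(id, name);
        }

        public Optional<String> findName(String id) {
            return Optional.ofNullable(store.get(id));
        }
    }
}
```

The in-memory implementation doubles as a test fixture, which is why this pattern pairs naturally with the unit/integration testing experience the listing also asks for.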
Nice to have's (but not required):
- Exposure to Kotlin or other JVM programming languages
- Strong understanding and production experience working with Docker container environments
- Strong understanding and production experience working with Kafka
- Cloud Environments: AWS, GCP or Azure
About the job
About Reflektive's Engineering Team
We are seeking a Senior Software Engineer, Front End to help scale Reflektive to being the market leader for employee performance management. The main question to be answered: Can you help a company scale?
Reflektive has major initiatives to tackle in the next year. Initiatives range from internal scaling, security, engagement, new verticals, pervasive technologies, research and development, data and analytics, and customer tools. Reflektive’s Senior Software Engineer will contribute in their area of specialization. S/he will help us solve complex design challenges and mature our platform to handle increasing traffic and scale.
You'll join a lean, prolific team where everyone, including you, is active in the product-defining and development process (where deploying new features every 2 weeks is common). You'll know the customers we're talking to, and the needs of each one. As a result, you know where your initiative and drive can best make a difference (and be recognized!)
Our engineering team consists of developers from a wide array of backgrounds. Our team primarily focuses on Rails and Javascript, but is always ready to use the best tool for the job when it makes sense. Following Scrum practices, we work closely with the Product Management team to develop features that focus on empowering and developing employees. Our team is a tight-knit, friendly group of engineers that are dedicated to learning from and teaching each other. Team members regularly contribute to and optimize our engineering practices and processes. Our team wants to make software engineering fun, easy, and fulfilling, so we've come up with a set of values that we apply to our software every day: Simple, Flexible, Consistent, Predictable, Efficient, and Pragmatic.
Responsibilities
● Depending on your specialization, projects/initiatives may include: security, scaling distributed systems, working on our core services related to user management, building out new verticals, guiding new engagement features, scaling traffic/imports/exports, and managing APIs.
● Work extremely cross-functionally across Engineering and Product Management.
● Deliverable: (30 days) Own a feature, possibly being paired with another engineer. (60 days) Own and drive a new initiative. (90 days) Bring that initiative to production.
Desired Skills and Experience
- Expert proficiency in Java/Kotlin, Kafka/Pulsar, SQL, Docker and Kubernetes.
- Overall 4+ years of experience as a Java full stack developer using any modern frameworks.
- Strong knowledge of data structures and algorithms
- Previous experience working in ReactJS/AngularJS (2+ years)
- Knowledge of Analytics and LookerML would be an added plus.
- Exposure to cloud environments such as AWS.
- Knowledge of unit testing frameworks including JUnit.
- Startup experience is strongly desired.
- You learn quickly, you’re adaptable and versatile.
- Experience in an Agile and Scrum environment.
About Reflektive
Forward-thinking organizations use Reflektive’s people management platform to drive employee performance and development with Real-Time Feedback, Recognition, Check-Ins, Goal Management, Performance Reviews, 1-on-1 Profiles, and Analytics. Reflektive’s more than 500 customers include Blue Origin, Comcast, Instacart, Dollar Shave Club, Healthgrades, Wavemaker Global, and Protective Life. Backed by Andreessen Horowitz, Lightspeed Venture Partners, and TPG Growth, Reflektive has raised more than $100 million to date, and was ranked the 13th Fastest Growing Company in North America on Deloitte’s 2018 Technology Fast 500™.
We are an equal opportunity employer and value diversity at our company. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
WISSEN TECHNOLOGY is hiring!
Java Developer with messaging experience (JMS/EMS/Kafka/RabbitMQ) and CI/CD.
Experience: 3 to 6 years
Location: Pune | Mumbai | Bangalore
Notice Period: Serving, or less than 15 days only.
Requirement:
Core Java 8
Mandatory experience in any of the messaging technologies like JMS/EMS/Kafka/RabbitMQ
Extensive experience in developing enterprise-scale n-tier applications for the financial domain.
Should possess good architectural knowledge and be aware of enterprise application design patterns.
Should have the ability to analyze, design, develop and test complex, low-latency client-facing applications.
Mandatory development experience on CI/CD platform.
Good knowledge of multi-threading and high-volume server side development
Experience in sales and trading platforms in investment banking/capital markets
Basic working knowledge of Unix/Linux.
Strong written and oral communication skills. Should have the ability to express their design ideas and thoughts.
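The multi-threading and high-volume server-side requirement above rests on one core pattern: a bounded queue between producer and consumer threads provides back-pressure, the same idea messaging brokers like Kafka and RabbitMQ enforce with quotas and prefetch limits. A plain-JDK sketch follows; the queue capacity and poison-pill convention are illustrative choices, not anything from the listing.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.LongAdder;

public class BoundedPipeline {
    // Moves `messages` items through a small bounded queue and returns the
    // count processed. put() blocks when the queue is full (back-pressure);
    // take() blocks when it is empty; -1 is a poison pill ending the consumer.
    public static long run(int messages) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);
        LongAdder processed = new LongAdder();

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < messages; i++) queue.put(i); // blocks when full
                queue.put(-1); // poison pill signals shutdown
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    int msg = queue.take(); // blocks when empty
                    if (msg == -1) break;
                    processed.increment();
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start();
        consumer.start();
        try {
            producer.join();
            consumer.join();
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return processed.sum();
    }
}
```

Swapping the in-process queue for a broker topic changes the transport, not the shape of the code, which is why this pattern comes up in interviews for messaging-heavy roles.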
Job Description: Full Stack Developer
Company: Arroz Technology Private Limited
CTC: 5 LPA
Location: Bangalore (Onsite)
Responsibilities:
- Design and develop scalable and high-performance web applications using the MERN (MongoDB, Express.js, React.js, Node.js) stack.
- Collaborate with cross-functional teams to gather requirements and translate them into high-level designs.
- Write clean, reusable, and well-structured code following industry best practices and coding standards.
- Mentor and guide junior developers, providing technical expertise and promoting professional growth.
- Conduct code reviews and provide constructive feedback to ensure code quality and adherence to standards.
- Collaborate with frontend and backend developers to integrate components and ensure smooth data flow.
- Work with UI/UX designers to implement responsive and user-friendly interfaces.
- Stay updated with the latest trends and advancements in full-stack development technologies.
- Work in a 10 AM to 6 PM, six-day office role, maintaining regular attendance and punctuality.
Required Skills and Qualifications:
-Strong proficiency in MERN (MongoDB, Express.js, React.js, Node.js) stack development.
-Experience with Redux or similar state management libraries.
-Solid understanding of front-end technologies such as HTML, CSS, and JavaScript.
-Proficiency in RESTful API development and integration.
-Familiarity with version control systems like Git and agile development methodologies.
-Good problem-solving and debugging skills.
-Excellent communication and teamwork abilities.
-Bachelor's degree in Computer Science or a related field (preferred).
Join Arroz Technology Private Limited as a Full Stack Developer and contribute to the development of cutting-edge web applications. This role offers competitive compensation and growth opportunities within a dynamic work environment.
Role : Senior Engineer Infrastructure
Key Responsibilities:
● Infrastructure Development and Management: Design, implement, and manage robust and scalable infrastructure solutions, ensuring optimal performance, security, and availability. Lead transition and migration projects, moving legacy systems to cloud-based solutions.
● Develop and maintain applications and services using Golang.
● Automation and Optimization: Implement automation tools and frameworks to optimize operational processes. Monitor system performance, optimizing and modifying systems as necessary.
● Security and Compliance: Ensure infrastructure security by implementing industry best practices and compliance requirements. Respond to and mitigate security incidents and vulnerabilities.
Qualifications:
● Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
● Good understanding of prominent backend languages like Golang, Python, Node.js, or others.
● In-depth knowledge of network architecture, system security, and infrastructure scalability.
● Proficiency with development tools, server management, and database systems.
● Strong experience with cloud services (AWS), deployment, scaling, and management.
● Knowledge of Azure is a plus
● Familiarity with containers and orchestration services, such as Docker, Kubernetes, etc.
● Strong problem-solving skills and analytical thinking.
● Excellent verbal and written communication skills.
● Ability to thrive in a collaborative team environment.
● Genuine passion for backend development and keen interest in scalable systems.
Job Description:
Organization - Prolifics Corporation
Skill - Java developer
Job type - Full time/Permanent
Location - Bangalore/Mumbai
Experience - 5 to 10 Years
Notice Period – Immediate to 30 Days
Required Skillset:
Spring framework concepts, Spring boot(Mandatory)
Spring batch and dashboard
Apache Kafka(Mandatory)
Azure (Mandatory)
GIT / Maven / Gradle / CI/CD
MS SQL database
Cloud and Data Exposure
Docker, Orchestration using Kubernetes
Genesys PureCloud or any cloud-based contact center platform that can be used to manage customer interactions.
Technical Experience:
The candidate should have 5+ years of experience, preferably at a technology or financial firm.
Must have at least 2-3 years of experience in Spring Batch / Java / Kafka / SQL.
Must have hands on experience in database tools and technologies.
Must have exposure to CI / CD and Cloud.
Work scope
Build the spring batch framework to pull the required data from Genesys
Cloud to MS reporting data storage – on prem / Cloud.
Build MS WM Contact Center Data Hub (on Prem / Cloud)
Build dashboard to monitor and manage the data injection, fusion jobs.
Event bridge implementation for real time data ingestion and monitoring
MS Private Cloud
- 5+ years of experience designing, developing, validating, and automating ETL processes
- 3+ years of experience with traditional ETL tools such as Visual Studio, SQL Server Management Studio, SSIS, SSAS and SSRS
- 2+ years of experience with cloud technologies and platforms, such as Kubernetes, Spark, Kafka, Azure Data Factory, Snowflake, ML Flow, Databricks, Airflow or similar
- Must have experience with designing and implementing data access layers
- Must be an expert with SQL/T-SQL and Python
- Must have experience in Kafka
- Define and implement data models with various database technologies like MongoDB, CosmosDB, Neo4j, MariaDB and SQL Server
- Ingest and publish data from sources and to destinations via an API
- Exposure to ETL/ELT using Kafka or Azure Event Hubs with Spark or Databricks is a plus
- Exposure to healthcare technologies and integrations for FHIR API, HL7 or other HIE protocols is a plus
Skills Required :
Designing, Developing, ETL, Visual Studio, Python, Spark, Kubernetes, Kafka, Azure Data Factory, SQL Server, Airflow, Databricks, T-SQL, MongoDB, CosmosDB, Snowflake, SSIS, SSAS, SSRS, FHIR API, HL7, HIE Protocols
Key roles:
· Develop backend applications with clean code practices
· Collaborate with frontend developers to integrate user-facing elements with server-side logic
· Troubleshoot and debug applications
· Gather and address technical and design requirements
· Build reusable code and libraries for future use
· Communicate with third-party teams for collaboration
Skillset:
· Core Java - Hands-on experience with JDK 11 and above
· Experience with code hosting and collaboration tools like Bitbucket/Github/GitLab
· Microservice architecture - Rest API calls, inter-service exception handling
· Non-relational DB - MongoDB basic DB commands to insert, update, delete, find records and Indexing
· Spring Boot framework - Spring Data JPA, ORM
· Event driven architecture - Kafka
· Tools like Postman, Jenkins, Doppler, IDE, MongoDB atlas
Qualifications:
· UG: B.Tech/B.E. in Any Specialization, B.Sc in Any Specialization, BCA in Any Specialization
· PG: Any Postgraduate
- Big data developer with 8+ years of professional IT experience with expertise in Hadoop ecosystem components in ingestion, Data modeling, querying, processing, storage, analysis, Data Integration and Implementing enterprise level systems spanning Big Data.
- A skilled developer with strong problem solving, debugging and analytical capabilities, who actively engages in understanding customer requirements.
- Expertise in Apache Hadoop ecosystem components like Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, Zookeeper, YARN, Flume, Pig, Nifi, Scala and Oozie.
- Hands on experience in creating real - time data streaming solutions using Apache Spark core, Spark SQL & DataFrames, Kafka, Spark streaming and Apache Storm.
- Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, which include Name Node, Data Node, Resource Manager, Node Manager and Job History Server.
- Worked on both Cloudera and Hortonworks Hadoop distributions. Experience in managing Hadoop clusters using the Cloudera Manager tool.
- Well versed in installation, configuration and management of Big Data and the underlying infrastructure of a Hadoop cluster.
- Hands on experience in coding MapReduce/Yarn Programs using Java, Scala and Python for analyzing Big Data.
- Exposure to Cloudera development environment and management using Cloudera Manager.
- Extensively worked on Spark using Scala on clusters for computational analytics; installed it on top of Hadoop and performed advanced analytical applications by making use of Spark with Hive and SQL/Oracle.
- Implemented Spark using Python, utilizing DataFrames and the Spark SQL API for faster processing of data; handled importing data from different data sources into HDFS using Sqoop and performed transformations using Hive and MapReduce before loading data into HDFS.
- Used Spark Data Frames API over Cloudera platform to perform analytics on Hive data.
- Hands-on experience in Spark MLlib, used for predictive intelligence, customer segmentation and smooth maintenance in Spark streaming.
- Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
- Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
- Working on creating data pipelines for ingestion and aggregation events, loading consumer response data into Hive external tables in HDFS to serve as a feed for Tableau dashboards.
- Hands on experience in using Sqoop to import data into HDFS from RDBMS and vice-versa.
- In-depth Understanding of Oozie to schedule all Hive/Sqoop/HBase jobs.
- Hands on expertise in real time analytics with Apache Spark.
- Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
- Extensive experience in working with different ETL tool environments like SSIS, Informatica and reporting tool environments like SQL Server Reporting Services (SSRS).
- Experience in the Microsoft cloud and setting up clusters in Amazon EC2 & S3, including automation of setting up and extending the clusters in the AWS cloud.
- Extensively worked on Spark using Python on clusters for computational analytics; installed it on top of Hadoop and performed advanced analytical applications by making use of Spark with Hive and SQL.
- Strong experience and knowledge of real time data analytics using Spark Streaming, Kafka and Flume.
- Knowledge in installation, configuration, supporting and managing Hadoop Clusters using Apache, Cloudera (CDH3, CDH4) distributions and on Amazon web services (AWS).
- Experienced in writing Ad Hoc queries using Cloudera Impala, also used Impala analytical functions.
- Experience in creating Data frames using PySpark and performing operation on the Data frames using Python.
- In depth understanding/knowledge of Hadoop Architecture and various components such as HDFS and MapReduce Programming Paradigm, High Availability and YARN architecture.
- Establishing multiple connections to different Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and providing access for pulling the information needed for analysis.
- Generated various kinds of knowledge reports using Power BI based on business specifications.
- Developed interactive Tableau dashboards to provide a clear understanding of industry specific KPIs using quick filters and parameters to handle them more efficiently.
- Well experienced in projects using JIRA, testing, Maven and Jenkins build tools.
- Experienced in designing, building, deploying and utilizing almost all of the AWS stack (including EC2 and S3), focusing on high availability, fault tolerance, and auto-scaling.
- Good experience with use-case development, with Software methodologies like Agile and Waterfall.
- Working knowledge of Amazon's Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
- Good working experience importing data using Sqoop and SFTP from various sources like RDBMS, Teradata, Mainframes, Oracle and Netezza to HDFS, and performing transformations on it using Hive, Pig and Spark.
- Extensive experience in Text Analytics, developing different Statistical Machine Learning solutions to various business problems and generating data visualizations using Python and R.
- Proficient in NoSQL databases including HBase, Cassandra and MongoDB, and their integration with Hadoop clusters.
- Hands-on experience in Hadoop Big Data technology, working on MapReduce, Pig and Hive as analysis tools, and Sqoop and Flume as data import/export tools.
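Bullets like "converting Hive/SQL queries into RDD transformations" describe the functional pipeline style Spark uses. As an illustration only (plain Python standing in for a Spark cluster; the `rows` sample data is invented), a GROUP BY/COUNT query maps onto map-then-reduce-by-key transformations like this:

```python
from collections import defaultdict

# Hypothetical sample data standing in for a Hive table of page-view events.
rows = [
    {"user": "a", "page": "home"},
    {"user": "b", "page": "home"},
    {"user": "a", "page": "cart"},
]

# SELECT page, COUNT(*) FROM views GROUP BY page
# expressed as the map -> reduceByKey steps a Spark job performs.
pairs = [(r["page"], 1) for r in rows]  # map each row to a (key, 1) pair
grouped = defaultdict(int)
for key, count in pairs:                # reduceByKey: sum counts per key
    grouped[key] += count

print(dict(grouped))  # {'home': 2, 'cart': 1}
```

In actual PySpark the same shape is written as `rdd.map(...).reduceByKey(...)` or as a DataFrame `groupBy("page").count()`, with the work distributed across executors.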
Roles and Responsibilities:
- Proven experience in Java 8, Spring Boot, Microservices and API
- Strong experience with Kafka, Kubernetes
- Strong experience in using RDBMS (MySQL) and NoSQL.
- Experience working in Eclipse or Maven environments
- Hands-on experience in Unix and Shell scripting
- Hands-on experience in fine-tuning application response and performance testing.
- Experience in Web Services.
- Strong analysis and problem-solving skills
- Strong communication skills, both verbal and written
- Ability to work independently with limited supervision
- Proven ability to use own initiative to resolve issues
- Full ownership of projects and tasks
- Ability and willingness to work under pressure, on multiple concurrent tasks, and to deliver to agreed deadlines
- Eagerness to learn
- Strong team-working skills
Required a full stack Senior SDE with focus on Backend microservices/ modular monolith with 3-4+ years of experience on the following:
- Bachelor’s or Master’s degree in Computer Science or equivalent industry technical skills
- Mandatory In-depth knowledge and strong experience in Python programming language.
- Expertise and significant work experience in Python with Fastapi and Async frameworks.
- Prior experience building Microservice and/or modular monolith.
- Should be an expert in Object-Oriented Programming and Design Patterns.
- Has knowledge and experience with SQLAlchemy/ORM, Celery, Flower, etc.
- Has knowledge and experience with Kafka / RabbitMQ, Redis.
- Experience in Postgres/ Cockroachdb.
- Experience in MongoDB/DynamoDB and/or Cassandra are added advantages.
- Strong experience in either AWS services (e.g., EC2, ECS, Lambda, Step Functions, S3, SQS, Cognito) and/or equivalent Azure services preferred.
- Experience working with Docker required.
- Experience in socket.io added advantage
- Experience with CI/CD, e.g. GitHub Actions, preferred.
- Experience in version control tools Git etc.
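The FastAPI/async stack required above rests on Python's asyncio event loop: one worker overlaps many slow I/O-bound requests instead of blocking on each. A minimal stdlib-only sketch of that concurrency model (no FastAPI here; the handler name and the 0.1s delay are invented stand-ins for real I/O):

```python
import asyncio
import time

async def handle_request(request_id: int) -> str:
    # Stand-in for a slow I/O call (database query, downstream API, etc.).
    await asyncio.sleep(0.1)
    return f"response-{request_id}"

async def main() -> list:
    # Serve three "requests" concurrently; total time is ~0.1s, not ~0.3s.
    return await asyncio.gather(*(handle_request(i) for i in range(3)))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results)
```

A FastAPI endpoint declared with `async def` is scheduled on this same event loop, which is why blocking calls inside async handlers are the classic performance pitfall the listing's "Async frameworks" requirement alludes to.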
This is one of the early positions for scaling up the Technology team. So culture-fit is really important.
- The role will require serious commitment, and someone with a similar mindset with the team would be a good fit. It's going to be a tremendous growth opportunity. There will be challenging tasks. A lot of these tasks would involve working closely with our AI & Data Science Team.
- We are looking for someone who has considerable expertise and experience on a low latency highly scaled backend / fullstack engineering stack. The role is ideal for someone who's willing to take such challenges.
- Coding Expectation – 70-80% of time.
- Has worked with an enterprise solution company/client, or with a growth-stage/scaled startup earlier.
- Skills to work effectively in a distributed and remote team environment.
About Quadratyx:
We are a global product-centric insight & automation services company. We help the world's organizations make better & faster decisions using the power of insight & intelligent automation. We build and operationalize their next-gen strategy through Big Data, Artificial Intelligence, Machine Learning, Unstructured Data Processing and Advanced Analytics. Quadratyx can boast more extensive experience in data sciences & analytics than most other companies in India.
We firmly believe in Excellence Everywhere.
Job Description
Purpose of the Job/ Role:
• As a Technical Lead, your work is a combination of hands-on contribution, customer engagement and technical team management. Overall, you’ll design, architect, deploy and maintain big data solutions.
Key Requisites:
• Expertise in Data structures and algorithms.
• Technical management across the full life cycle of big data (Hadoop) projects from requirement gathering and analysis to platform selection, design of the architecture and deployment.
• Scaling of cloud-based infrastructure.
• Collaborating with business consultants, data scientists, engineers and developers to develop data solutions.
• Leading and mentoring a team of data engineers.
• Hands-on experience in test-driven development (TDD).
• Expertise in NoSQL databases like MongoDB, Cassandra etc. (MongoDB preferred), and strong knowledge of relational databases.
• Good knowledge of Kafka and Spark Streaming internal architecture.
• Good knowledge of any Application Servers.
• Extensive knowledge of big data platforms like Hadoop; Hortonworks etc.
• Knowledge of data ingestion and integration on cloud services such as AWS; Google Cloud; Azure etc.
Skills/ Competencies Required
Technical Skills
• Strong expertise (9 or more out of 10) in at least one modern programming language, like Python or Java.
• Clear end-to-end experience in designing, programming, and implementing large software systems.
• Passion and analytical abilities to solve complex problems.
Soft Skills
• Always speaking your mind freely.
• Communicating ideas clearly in talking and writing, integrity to never copy or plagiarize intellectual property of others.
• Exercising discretion and independent judgment where needed in performing duties; not needing micro-management, maintaining high professional standards.
Academic Qualifications & Experience Required
Required Educational Qualification & Relevant Experience
• Bachelor’s or Master’s in Computer Science, Computer Engineering, or related discipline from a well-known institute.
• Minimum 7 - 10 years of work experience as a developer in an IT organization (preferably with an Analytics / Big Data / Data Science / AI background).
EXPERTISE AND QUALIFICATIONS
- 14+ years of experience in Software Engineering with at least 6+ years as a Lead Enterprise Architect preferably in a software product company
- High technical credibility - ability to lead technical brainstorming, take decisions and push for the best solution to a problem
- Experience in architecting Microservices based E2E Enterprise Applications
- Experience in UI technologies such as Angular, Node.js or Fullstack technology is desirable
- Experience with NoSQL technologies (MongoDB, Neo4j etc.)
- Elasticsearch, Kibana, Logstash (ELK stack).
- Good understanding of Kafka, Redis, ActiveMQ, RabbitMQ, Solr etc.
- Exposure to SaaS cloud-based platforms.
- Experience on Docker, Kubernetes etc.
- Experience in planning, designing, developing and delivering Enterprise Software using Agile Methodology
- Key Programming Skills: Java, J2EE with cutting edge technologies
- Hands-on technical leadership with proven ability to recruit and mentor high performance talents including Architects, Technical Leads, Developers
- Excellent team building, mentoring and coaching skills are a must-have
- A proven track record of consistently setting and achieving high standards
Five Reasons Why You Should Join Zycus
1. Cloud Product Company: We are a Cloud SaaS company, and our products are created using the latest technologies like ML and AI. Our UI is in AngularJS and we are developing our mobile apps using React.
2. A Market Leader: Zycus is recognized by Gartner (world’s leading market research analyst) as a Leader in Procurement Software Suites.
3. Move between Roles: We believe that change leads to growth and therefore we allow our employees to shift careers and move to different roles and functions within the organization
4. Get a Global Exposure: You get to work and deal with our global customers.
5. Create an Impact: Zycus gives you the environment to create an impact on the product and transform your ideas into reality. Even our junior engineers get the opportunity to work on different product features.
About Us
Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects; offers data-driven actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C type user-experience to the end-users.
Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization.
Start your #CognitiveProcurement journey with us, as you are #MeantforMore
About Us :
Markowate is a digital product development company building digital products on AI, Blockchain, Mobile, and Web3. We work with tech startups as their technical partner and help them with their digital transformation.
Role Overview: As a Solution Architect, you will collaborate with stakeholders, including business executives, project managers, and software development teams, to understand the organization's objectives and requirements. You will then design scalable and efficient software solutions that align with these requirements. Your role involves assessing technologies, creating architecture designs, overseeing the development process, and ensuring the successful implementation of the solutions.
"Note: Should have 9+ years of relevant experience. Must have worked with Node.js technology."
Responsibilities:
- Collaborate with stakeholders to understand and analyze business and technical requirements, and translate them into scalable and feasible solution designs.
- Develop end-to-end solution architectures, considering factors such as system integration, scalability, performance, security, and reliability.
- Research and evaluate new technologies, frameworks, and platforms to determine their suitability for the organization's needs.
- Provide technical guidance and support to development teams throughout the software development life cycle (SDLC) to ensure adherence to the architectural vision and best practices.
- Effectively communicate complex technical concepts to non-technical stakeholders, such as executives and project managers, and provide recommendations on technology-related decisions.
- Identify and mitigate technical risks by proactively identifying potential issues and developing contingency plans.
- Collaborate with quality assurance teams to define and implement testing strategies that validate the solution's functionality, performance, and security.
- Create and maintain architectural documentation, including diagrams, technical specifications, and design guidelines, to facilitate efficient development and future enhancements.
- Stay up-to-date with industry trends, best practices, and emerging technologies to drive innovation and continuous improvement within the organization.
Requirements:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- Should have 9+ years of experience, with relevant experience as a Solution Architect.
- Must have experience in Node.js and a deep understanding of AI/ML and data pipelines.
- Proven experience as a Solution Architect or a similar role, with a strong background in software development.
- In-depth knowledge of software architecture patterns, design principles, and development methodologies.
- Proficiency in various programming languages and frameworks.
- Strong problem-solving and analytical skills, with the ability to think strategically and propose innovative solutions.
- Excellent communication and presentation skills, with the ability to convey complex technical concepts to both technical and non-technical stakeholders.
- Experience in cloud computing platforms, such as AWS, Azure, or Google Cloud, and understanding of their services and deployment models.
- Familiarity with DevOps practices, continuous integration/continuous deployment (CI/CD) pipelines, and containerization technologies like Docker and Kubernetes.
at Mobile Programming LLC
- Role: IoT Application Development (Java)
Skill Set:
- Proficiency in Java 11.
- Strong knowledge of Spring Boot framework.
- Experience with Kubernetes.
- Familiarity with Kafka.
- Understanding of Azure Cloud services.
Experience: 3 to 5 years; Location: Bangalore; Notice Period: Immediate joiners
- Job Description: We are seeking an experienced IoT Application Developer with expertise in Java to join our team in Bangalore. As a Java Developer, you will be responsible for designing, developing, and deploying IoT applications. You should have a solid understanding of Java 11 and the Spring Boot framework. Experience with Kubernetes and Kafka is also required. Familiarity with Azure Cloud services is essential. Your role will involve collaborating with the development team to build scalable and efficient IoT solutions using Java and related technologies.
Role : Principal Devops Engineer
About the Client
It is a product-based company that is building a platform using AI and ML technology for transportation and logistics. They also have a presence in the global market.
Responsibilities and Requirements
• Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
• Knowledge in Linux/Unix Administration and Python/Shell Scripting
• Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
• Knowledge in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, Cloud Watch Monitoring, Nagios
• Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
• Experience in enterprise application development, maintenance and operations
• Knowledge of best practices and IT operations in an always-up, always-available service
• Excellent written and oral communication skills, judgment and decision-making skill
About Merchandise Operations (Merch Ops): Merchandise Operations (Merch Ops) is a merchandise management system. It is positioned as a host system in retail solutions, with the ability to maintain master/foundation data, create and manage purchase orders, create and manage prices & promotions, perform replenishment, and provide effective inventory control and financial management. Merch Ops provides business users with consistent, accurate, and timely data across an enterprise by allowing them to get the:
Right Goods in the...
• Right Silhouettes, Sizes and Colors; at the...
• Right Price; at the...
• Right Location; for the...
• Right Consumer; at the...
• Right Time; at the...
• Right Quantity.
About Team:
• Proven, passionate bunch of disruptors providing solutions that solve real-time supply chain problems.
• A well-mixed team of young and experienced members with product, domain, and industry knowledge.
• Gained Expertise in designing and deploying massively scalable cloud native SaaS products
• The team currently comprises associates across the globe and is expected to grow rapidly.
Our current technical environment:
• Software: React JS, Node JS, Oracle PL/SQL, GIT, REST API, JavaScript.
• Application Architecture: Scalable three tier web application.
• Cloud Architecture: Private cloud, MS Azure (ARM templates, AKS, HDInsight, Application Gateway, Virtual Networks, Event Hub, Azure AD)
• Frameworks/Others: Apache Tomcat, RDBMS, Jenkins, Nginx, TypeORM, Express.
What you'll be doing:
• As a Staff Engineer you will be responsible for the design of the features in the product roadmap
• Creating and encouraging good software development practices engineering-wide, driving strategic technical improvements, and mentoring other engineers.
• You will write code as we expect our technical leadership to be in the trenches alongside junior engineers, understanding root causes and leading by example
• You will mentor engineers
• You will own relationships with other engineering teams and collaborate with other functions within Blue Yonder
• Drive architecture and designs to become simpler, more robust, and more efficient.
• Lead design discussions and come up with robust, more efficient designs to achieve features in the product roadmap
• Take complete responsibility for the features developed, from coding through deployment
• Introduce new technology and tools for the betterment of the product
• Guide fellow engineers to look beyond the surface and fix root causes rather than symptoms.
What we are looking for:
• Bachelor’s degree (B.E/B.Tech/M.Tech Computer Science or related specialization) and minimum 7 to 10 years of experience in software development, having been an Architect within the last 1-2 years minimum.
• Strong programming experience and background in Node JS and React JS.
• Hands-on development skills along with architecture/design experience.
• Hands-on experience in designing, building, deploying and maintaining enterprise cloud solutions.
• Demonstrable experience, thorough knowledge, and interests in Cloud native architecture, Distributed micro-services, Multi-tenant SaaS solution and Cloud Scalability, performance, and High availability
• Experience with API management platforms & providing / consuming RESTful APIs
• Experience with varied tools such as REST, Hibernate, RDBMS, Docker, Kubernetes, Kafka, React.
• Hands-on development experience on Oracle PL/SQL.
• Experience with DevOps and infrastructure automation.
- 2.5+ year of experience in Development in JAVA technology.
- Strong Java Basics
- SpringBoot or Spring MVC
- Hands-on experience with Relational Databases (SQL queries or Hibernate) + Mongo (JSON parsing)
- Proficient in REST API development
- Messaging Queue (RabbitMQ or Kafka)
- Microservices
- Any Caching Mechanism
- Good at problem solving
Good to Have Skills:
- 3+ years of experience in using Java/J2EE tech stacks
- Good understanding of data structures and algorithms.
- Excellent analytical and problem solving skills.
- Ability to work in a fast paced internet start-up environment.
- Experience in technical mentorship/coaching is highly desirable.
- Understanding of AI/ML algorithms is a plus.
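The messaging-queue requirement above (RabbitMQ or Kafka) boils down to the producer/consumer pattern: services never call each other directly, they exchange messages through a broker. A stdlib-only sketch of that pattern (here `queue.Queue` stands in for the broker, and the order IDs and sentinel convention are invented for illustration; a real service would use a Kafka or RabbitMQ client library with the same shape):

```python
import queue
import threading

broker = queue.Queue()   # stand-in for a Kafka topic / RabbitMQ queue
consumed = []

def producer():
    # Publish a few order events, then a sentinel to signal end-of-stream.
    for order_id in ("order-1", "order-2", "order-3"):
        broker.put(order_id)
    broker.put(None)

def consumer():
    # Poll the broker until the sentinel arrives, like a consumer poll loop.
    while True:
        message = broker.get()
        if message is None:
            break
        consumed.append(message)

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
print(consumed)  # ['order-1', 'order-2', 'order-3']
```

The decoupling is the point: the producer keeps working even if the consumer is slow or briefly down, which is why these listings pair message queues with microservices.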
· Core responsibilities include analyzing business requirements and designs for accuracy and completeness, and developing and maintaining the relevant product.
· Blue Yonder is seeking a Senior/Principal Architect in the Data Services department (under the Luminate Platform) to act as one of the key technology leaders building and managing Blue Yonder's technology assets in the Data Platform and Services.
· This individual will act as a trusted technical advisor and strategic thought leader to the Data Services department. The successful candidate will have the opportunity to lead, participate, guide, and mentor other people in the team on architecture and design in a hands-on manner. You are responsible for the technical direction of the Data Platform. This position reports to the Global Head, Data Services, and will be based in Bangalore, India.
· Core responsibilities include architecting and designing (along with counterparts and distinguished Architects) a ground-up cloud-native (we use Azure) SaaS product in order management and micro-fulfillment.
· The team currently comprises 60+ global associates across the US, India (COE) and the UK and is expected to grow rapidly. The incumbent will need leadership qualities to also mentor junior and mid-level software associates on our team. This person will lead the Data Platform architecture: Streaming and Bulk, with Snowflake/Elasticsearch/other tools.
Our current technical environment:
· Software: Java, Spring Boot, Gradle, GIT, Hibernate, REST API, OAuth, Snowflake
· Application Architecture: Scalable, resilient, event-driven, secure multi-tenant microservices architecture
· Cloud Architecture: MS Azure (ARM templates, AKS, HDInsight, Application Gateway, Virtual Networks, Event Hub, Azure AD)
· Frameworks/Others: Kubernetes, Kafka, Elasticsearch, Spark, NoSQL, RDBMS, Spring Boot, Gradle, GIT, Ignite
Title: Platform Engineer Location: Chennai Work Mode: Hybrid (Remote and Chennai Office) Experience: 4+ years Budget: 16 - 18 LPA
Responsibilities:
- Parse data using Python, create dashboards in Tableau.
- Utilize Jenkins for Airflow pipeline creation and CI/CD maintenance.
- Migrate Datastage jobs to Snowflake, optimize performance.
- Work with HDFS, Hive, Kafka, and basic Spark.
- Develop Python scripts for data parsing, quality checks, and visualization.
- Conduct unit testing and web application testing.
- Implement Apache Airflow and handle production migration.
- Apply data warehousing techniques for data cleansing and dimension modeling.
Requirements:
- 4+ years of experience as a Platform Engineer.
- Strong Python skills, knowledge of Tableau.
- Experience with Jenkins, Snowflake, HDFS, Hive, and Kafka.
- Proficient in Unix Shell Scripting and SQL.
- Familiarity with ETL tools like DataStage and DMExpress.
- Understanding of Apache Airflow.
- Strong problem-solving and communication skills.
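The Airflow work above centers on one idea: a pipeline is a DAG of tasks, and the scheduler only runs a task once everything it depends on has finished. Python's stdlib `graphlib` gives a minimal illustration of that ordering (the task names are invented; a real Airflow DAG declares the same dependencies with operators and the `>>` syntax):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: one extract feeds two transforms, which feed a load.
dag = {
    "transform_orders": {"extract"},
    "transform_users": {"extract"},
    "load_warehouse": {"transform_orders", "transform_users"},
}

# static_order() yields tasks so each appears after all its dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

Here `extract` always comes first and `load_warehouse` always comes last; the two transforms can run in either order (or in parallel, which is exactly what an Airflow scheduler exploits).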
Note: Only candidates willing to work in Chennai and available for immediate joining will be considered. Budget for this position is 16 - 18 LPA.
Roles & Responsibilities:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
- Knowledge in Linux/Unix Administration and Python/Shell Scripting
- Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
- Knowledge in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment and decision-making skills
Have you streamed a program on Disney+, watched your favorite binge-worthy series on Peacock or cheered your favorite team on during the World Cup from one of the 20 top streaming platforms around the globe? If the answer is yes, you’ve already benefitted from Conviva technology, helping the world’s leading streaming publishers deliver exceptional streaming experiences and grow their businesses.
Conviva is the only global streaming analytics platform for big data that collects, standardizes, and puts trillions of cross-screen, streaming data points in context, in real time. The Conviva platform provides comprehensive, continuous, census-level measurement through real-time, server side sessionization at unprecedented scale. If this sounds important, it is! We measure a global footprint of more than 500 million unique viewers in 180 countries watching 220 billion streams per year across 3 billion applications streaming on devices. With Conviva, customers get a unique level of actionability and scale from continuous streaming measurement insights and benchmarking across every stream, every screen, every second.
What you get to do in this role:
Work on extremely high-scale Rust web services or backend systems.
Design and develop solutions for highly scalable web and backend systems.
Proactively identify and solve performance issues.
Maintain a high bar on code quality and unit testing.
What you bring to the role:
5+ years of hands-on software development experience.
At least 2+ years of Rust development experience.
Knowledge of Cargo crates for Kafka, Redis, etc.
Strong CS fundamentals, including system design, data structures and algorithms.
Expertise in backend and web services development.
Good analytical and troubleshooting skills.
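As an illustration of the crate knowledge mentioned above, a Rust service talking to Kafka and Redis would typically declare its dependencies in `Cargo.toml`. The crate names below (`rdkafka`, `redis`, `tokio`) are the commonly used crates.io packages for these systems; the version numbers are indicative only:

```toml
[package]
name = "stream-service"   # hypothetical service name
version = "0.1.0"
edition = "2021"

[dependencies]
rdkafka = "0.36"   # Kafka client (bindings over librdkafka)
redis = "0.25"     # Redis client
tokio = { version = "1", features = ["full"] }  # async runtime
```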
What will help you stand out:
Experience working with large scale web services and applications.
Exposure to Golang, Scala or Java
Exposure to Big data systems like Kafka, Spark, Hadoop etc.
Underpinning the Conviva platform is a rich history of innovation. More than 60 patents represent award-winning technologies and standards, including first-of-its-kind innovations like time-state analytics and AI-automated data modeling that surface actionable insights. By understanding real-world human experiences and having the ability to act within seconds of observation, our customers can solve business-critical issues and focus on growing their business ahead of the competition. Examples of the brands Conviva has helped fuel streaming growth for include: DAZN, Disney+, HBO, Hulu, NBCUniversal, Paramount+, Peacock, Sky, Sling TV, Univision and Warner Bros Discovery.
Privately held, Conviva is headquartered in Silicon Valley, California with offices and people around the globe. For more information, visit us at www.conviva.com. Join us to help extend our leadership position in big data streaming analytics to new audiences and markets!
As Conviva is expanding, we are building products providing deep insights into end-user experience for our customers.
Platform and TLB Team
The vision for the TLB team is to build data processing software that works on terabytes of streaming data in real time. Engineer the next-gen Spark-like system for in-memory computation of large time-series datasets – both the Spark-like backend infrastructure and the library-based programming model. Build a horizontally and vertically scalable system that analyzes trillions of events per day within sub-second latencies. Utilize the latest big data technologies to build solutions for use cases across multiple verticals. Lead technology innovation and advancement that will have a big business impact for years to come. Be part of a worldwide team building software using the latest technologies and the best of software development tools and processes.
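The in-memory time-series computation described above can be hinted at with a minimal sketch, using only the Rust standard library. The function name and tumbling-window scheme are illustrative assumptions, not Conviva's actual design: each event is bucketed by its window start and averaged per window.

```rust
use std::collections::BTreeMap;

/// Hypothetical sketch of tumbling-window aggregation, a core primitive
/// in time-series stream processing. Events are (timestamp_ms, value).
fn tumbling_window_avg(events: &[(u64, f64)], window_ms: u64) -> BTreeMap<u64, f64> {
    let mut sums: BTreeMap<u64, (f64, u64)> = BTreeMap::new();
    for &(ts, v) in events {
        // Align the timestamp down to the start of its window.
        let bucket = ts / window_ms * window_ms;
        let e = sums.entry(bucket).or_insert((0.0, 0));
        e.0 += v; // running sum
        e.1 += 1; // event count
    }
    // Convert (sum, count) into an average per window.
    sums.into_iter()
        .map(|(bucket, (sum, n))| (bucket, sum / n as f64))
        .collect()
}

fn main() {
    let events = [(1_000, 2.0), (1_500, 4.0), (2_200, 6.0)];
    let avgs = tumbling_window_avg(&events, 1_000);
    println!("{:?}", avgs); // averages keyed by window start
}
```

A production system would shard this state across nodes and evict closed windows; the sketch only shows the per-window reduction step.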
What You’ll Do
This is an individual contributor position. Expectations are along the following lines:
- Design, build and maintain the stream processing, and time-series analysis system which is at the heart of Conviva’s products
- Responsible for the architecture of the Conviva platform
- Build features, enhancements, and new services, and fix bugs in Scala and Java on a Jenkins-based pipeline, deploying as Docker containers on Kubernetes
- Own the entire lifecycle of your microservice including early specs, design, technology choice, development, unit-testing, integration-testing, documentation, deployment, troubleshooting, enhancements, etc.
- Lead a team to develop a feature or parts of a product
- Adhere to the Agile model of software development to plan, estimate, and ship per business priority
What you need to succeed
- 5+ years of work experience in software development of data processing products.
- Engineering degree in software or equivalent from a premier institute.
- Excellent knowledge of Computer Science fundamentals like algorithms and data structures. Hands-on experience with functional programming and a solid grasp of its concepts
- Excellent programming and debugging skills on the JVM. Proficient in writing code in Scala/Java/Rust/Haskell/Erlang that is reliable, maintainable, secure, and performant
- Experience with big data technologies like Spark, Flink, Kafka, Druid, HDFS, etc.
- Deep understanding of distributed systems concepts and scalability challenges including multi-threading, concurrency, sharding, partitioning, etc.
- Experience/knowledge of Akka/Lagom framework and/or stream processing technologies like RxJava or Project Reactor will be a big plus. Knowledge of design patterns like event-streaming, CQRS and DDD to build large microservice architectures will be a big plus
- Excellent communication skills. Willingness to work under pressure. Hunger to learn and succeed. Comfortable with ambiguity. Comfortable with complexity
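The sharding/partitioning concept listed above can be sketched in a few lines of Rust. The function name is hypothetical, and `DefaultHasher` stands in for the stable hash a real system (e.g., Kafka's default partitioner) would use:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical sketch: map a record key to one of `n_partitions`,
/// the same idea behind Kafka partitions and sharded data stores.
fn partition_for(key: &str, n_partitions: u64) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    h.finish() % n_partitions
}

fn main() {
    // The same key always maps to the same partition within a run,
    // preserving per-key ordering while spreading load across shards.
    let p = partition_for("viewer-42", 8);
    assert!(p < 8);
    assert_eq!(p, partition_for("viewer-42", 8)); // deterministic
    println!("viewer-42 -> partition {}", p);
}
```

Note that `DefaultHasher` is only stable within a single process; a real partitioner would use a fixed algorithm (e.g., murmur2 in Kafka) so assignments survive restarts.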