
Job Description
Role: Java Developer
Location: PAN India
Experience: 4+ Years
Required Skills:
- 3+ years Java development experience
- Spring Boot framework expertise (MANDATORY)
- Microservices architecture design & implementation (MANDATORY)
- Hibernate/JPA for database operations (MANDATORY)
- RESTful API development (MANDATORY)
- Database design and optimization (MANDATORY)
- Container technologies (Docker/Kubernetes)
- Cloud platforms experience (AWS/Azure)
- CI/CD pipeline implementation
- Code review and quality assurance
- Problem-solving and debugging skills
- Agile/Scrum methodology
- Version control systems (Git)
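The Spring Boot / REST / JPA items above can be illustrated with a small framework-free sketch. This is a hypothetical in-memory repository in plain Java; in a real Spring Boot service the entity would carry JPA annotations (such as @Entity) and be exposed through an @RestController, but the CRUD shape is the same:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical entity: a JPA-backed service would annotate this with @Entity.
record Customer(long id, String name) {}

// Minimal in-memory stand-in for the repository/DAO layer that a
// Hibernate/JPA service would generate; the names here are illustrative only.
class CustomerRepository {
    private final Map<Long, Customer> store = new ConcurrentHashMap<>();
    private final AtomicLong sequence = new AtomicLong();

    // Assign an id and persist the record (in memory here).
    Customer save(String name) {
        long id = sequence.incrementAndGet();
        Customer customer = new Customer(id, name);
        store.put(id, customer);
        return customer;
    }

    Optional<Customer> findById(long id) {
        return Optional.ofNullable(store.get(id));
    }

    // Returns true if a record was actually removed.
    boolean deleteById(long id) {
        return store.remove(id) != null;
    }
}
```

In a real service each of these methods would back one REST endpoint (POST, GET, DELETE) on a resource path.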

Similar jobs
Job Description
- 5-8 years of IT industry experience, preferably in the Banking domain
- Strong Python skills and a good understanding of Java and Microservices
- Should have handled banking customers and have exposure to production support processes
- Good database and PL/SQL skills; able to write SQL as and when needed
- Good attitude and communication skills
- WFH not allowed; work from bank premises as per the bank calendar in Saifabad or Hitech City (flexibility to work from Saifabad is a must)
.Net Core Developer
Experience: 2+ years in .NET Core/C#
Responsibilities: Responsible for designing and developing REST APIs using the .NET Core framework and C#. Create high-performance REST APIs for financial applications.
Qualifications: B.Tech/BE or MCA (may be relaxed for experienced candidates).
Requirements:
- At least 2 years of experience as a .NET Core developer
- Proficient in .NET Core and C#
- Excellent knowledge of developing REST APIs and Entity Framework
- Knowledge and experience writing SQL, stored procedures, and triggers
- Sound knowledge of MVC frameworks and databases
- Good project management skills
- Good communication skills
- A good team player
Salary: 2-8 LPA
• Develop key functionality and core capabilities using the Java/J2EE stack
• Design and develop RDandX Network’s microservices and ensure bug-free code is pushed to the deployment pipeline to support large volumes of transactions
• Define and communicate the technical design requirements to the Network’s stakeholders
and the Engineering lead
• Responsible for building RESTful services to integrate with third-party services like AdWords and the Facebook Marketing API
• Responsible for designing the technical architecture of the different services and
maintaining and upgrading it
• Design the unit test cases and build the framework for the development team to enforce unit testing across all the services
• Be involved in and contribute to end-to-end product lifecycle management
• Learn about new technologies and stay up to date with best practices
• Collaborate with a multidisciplinary team of designers, engineers, system administrators, and the product team
• Lead the backend team and manage their day-to-day activities and work deliverables
Should have:
i. 4-7 years of working experience
ii. Experience with Java, J2EE, Spring Boot, and PostgreSQL
iii. Working knowledge of Jenkins, Git, Maven, and GlassFish or Tomcat servers
iv. Experience working with cloud technologies like AWS (Kinesis, Lambda, SQS) or GCP (Pub/Sub, DataProc, Dataflow) is desirable
v. Experience working with large-scale real-time data processing systems
vi. Working experience building event-driven microservice APIs
vii. Elasticsearch and Kubernetes (K8s) experience is good to have
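The event-driven microservice pattern named in point vi can be sketched with a toy in-process publish/subscribe bus in plain Java. The class and method names here are hypothetical; in production the transport would be Kinesis, SQS, or Pub/Sub rather than an in-memory subscriber list:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy synchronous event bus illustrating the publish/subscribe shape of
// event-driven services. Real deployments would use a durable broker
// (Kinesis, SQS, Pub/Sub) and asynchronous consumers.
class EventBus {
    private final List<Consumer<String>> subscribers = new ArrayList<>();

    // Register a handler that will receive every published event.
    void subscribe(Consumer<String> handler) {
        subscribers.add(handler);
    }

    // Deliver an event to all current subscribers, in order.
    void publish(String event) {
        for (Consumer<String> handler : subscribers) {
            handler.accept(event);
        }
    }
}
```

The key property the sketch preserves is that producers publish without knowing who consumes, which is what lets individual microservices be deployed and scaled independently.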
- Set up, manage, automate, and deploy AI models in development and production infrastructure
- Orchestrate lifecycle management of AI models
- Create APIs and help business customers put the results of the AI models into operation
- Develop MVP ML models and prototype applications applying known AI models, and verify the problem/solution fit
- Validate the AI models
- Make models performant (in time and space) based on business needs
- Perform statistical analysis and fine-tuning using test results
- Train and retrain systems when necessary
- Extend existing ML libraries and frameworks
- Process, cleanse, and verify the integrity of data used for analysis
- Ensure that algorithms generate accurate user recommendations/insights/outputs
- Keep abreast of the latest AI tools relevant to our business domain
- Bachelor’s or master’s degree in Computer Science, Statistics, or a related field
- A master’s degree in data analytics or similar will be advantageous
- 3-5 years of relevant experience deploying AI models to production
- Understanding of data structures, data modeling, and software architecture
- Good knowledge of math, probability, statistics, and algorithms
- Ability to write robust code in Python/R
- Proficiency in query languages such as SQL
- Familiarity with machine learning frameworks such as PyTorch and TensorFlow, and libraries such as scikit-learn
- Experience with well-known machine learning models (SVMs, clustering techniques, forecasting models, Random Forests, etc.)
- Knowledge of CI/CD for building and hosting solutions
- We don’t expect you to be an expert or an AI researcher, but you must be able to take existing models and best practices and adapt them to our environment
- Adherence to compliance procedures in accordance with regulatory standards, requirements, and policies
- Ability to work effectively and independently in a fast-paced agile environment with tight deadlines
- A flexible, pragmatic, and collaborative team player with an innate ability to engage with data architects, analysts, and scientists
Your Day-to-Day Tasks Include:
Work across requirements engineering, design, development, and deployment. All tasks involve working with Java, SQL Server, and Couchbase.
Build and monitor data pipelines that serve 100+ websites and 150M+ unique impressions daily. Write code that can handle 4x the given scale requirement.
Maintain uptime of multiple distributed web applications.
Build data pipelines to pull data from upstream partners like Google.
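The pipeline tasks above can be sketched with a minimal batching stage in plain Java. This is an illustrative, hypothetical fragment, not the employer's actual pipeline: upstream records are grouped into fixed-size batches before being written downstream, and the batch size is the kind of tuning knob you would size for the 4x scale headroom the posting asks for:

```java
import java.util.ArrayList;
import java.util.List;

// Toy pipeline stage: groups upstream records into fixed-size batches
// before a downstream write. Batch size is a hypothetical tuning knob;
// real pipelines would also need retries, backpressure, and monitoring.
class BatchingStage {
    private final int batchSize;

    BatchingStage(int batchSize) {
        this.batchSize = batchSize;
    }

    // Split the input into consecutive batches of at most batchSize records.
    List<List<String>> batch(List<String> records) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < records.size(); i += batchSize) {
            batches.add(new ArrayList<>(
                records.subList(i, Math.min(i + batchSize, records.size()))));
        }
        return batches;
    }
}
```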
You Should Have:
Minimum 3 years of experience with Java.
Minimum 2 years of experience with any SQL database (MySQL, MSSQL, Oracle, DB2, Sybase).
Minimum 3 years of experience with web development.
Experience with any NoSQL database (MongoDB, Couchbase, CouchDB, DynamoDB).
Experience designing/implementing/maintaining scalable systems.
Experience with any cloud platform (AWS/Azure/GCP).
Good To Have:
Experience with BI and data reporting.
Experience with Elasticsearch.
Understanding of data warehousing.
Experience in Node.js.
Locus is a global decision-making platform for the supply chain that uses deep learning and proprietary algorithms to provide route optimization, real-time tracking, insights and analytics, beat optimization, efficient warehouse management, vehicle allocation and utilization, intuitive 3D packing, and measurement of packages. Locus automates the human decisions required to transport a package or a person between any two points on earth, delivering gains in efficiency, consistency, and transparency in operations.
Locus, which has achieved a peak of 1 million orders processed in a day (200,000 orders an hour) and is trained and tested on over 100 million order deliveries, works in 75 cities across the globe. Locus works with several large-scale market leaders like Urban Ladder, the Tata Group of Companies, Droplet, Licious, Rollick, Lenskart, and other global FMCG, pharma, e-commerce, 3PL, and logistics conglomerates.
Locus is backed by some of the biggest names in the market and recently raised $22 Mn in Series B funding as well as $4 Mn in a pre-Series B round. Earlier, in 2016, Locus raised $2.75 Mn (INR 18.3 Cr) in Series A funding.
Locus was started by Nishith Rastogi and Geet Garg, two ex-Amazon engineers on a mission to democratize logistics intelligence for businesses across industries. Nishith was profiled by Forbes Asia in their ’30 Under 30’ 2018 list. Geet holds a dual degree (B.Tech and M.Tech) in Computer Science and Engineering from the Indian Institute of Technology. Our team consists of engineers from the Indian Institute of Technology and the Birla Institute of Technology & Science, Pilani, and data scientists with PhDs from Carnegie Mellon University and the Tata Institute of Fundamental Research. Our multifaceted product and business team is led by senior members from Barclays, Google, and Goldman Sachs with immense operational execution experience.
Job Description
- Design & implement backend APIs at Locus.sh
- Mentor junior developers technically.
- Actively work to reduce tech debt in the Locus backend
- Work towards more stability & scalability of the backend
- Tech stack: Java, AWS, Aurora, etc.
Eligibility
- 4-8 years of product company experience
- OOP implementation experience. The programming language does not matter; we use Java internally but have hired folks from non-Java backgrounds.
- Hands-on experience with SQL, DynamoDB, PostgreSQL, etc. preferred
- Prior experience building REST APIs
- Advanced understanding of AWS stack
- Prior experience solving problems at scale
Perks:
- Healthy catered meals at office
- You decide your own Work From Home (WFH) and Out Of Office (OOO)
- Pet-friendly - bring your pets to the office any day. Locus family already has a Rottweiler and a Beagle
- Own the development, design, scaling, and maintenance of the application and messaging engines that power the central platform of Capillary's Cloud CRM product.
- Contribute to overall design and roadmap.
- Mentor junior team members.
Required Skills:
- Innovative and self-motivated with passion to develop complex and scalable applications.
- 3-5 years of experience in software development with strong focus on algorithms and data structures.
- Strong coding and design skills with prior experience developing scalable, high-availability applications using Core Java/J2EE, Spring, and Hibernate.
- Work experience with relational databases is required (primarily MySQL)
- Prior work experience with non-relational databases (primarily Redis, MongoDB) is an added plus
- Strong analytical and problem-solving skills
- B.Tech
