
➢ Design and develop core server-side components for our consumer-facing site as well as for fulfillment and inventory logistics
➢ Prototype new ideas
Experience / Skills Required:
➢ B.Tech or advanced degree in Computer Science with 8-12 years of progressive industry
experience
➢ Strong experience with Python development and the Django framework, or Node.js, required
➢ Strong experience with JavaScript, HTML, CSS, AJAX, and JSON
➢ Strong experience with building reusable server components
➢ Strong experience with web services, XML, and REST interfaces
➢ Strong experience with object-oriented programming
➢ Strong experience with backend performance optimization and algorithms
➢ Strong experience with SQL, data modeling, and relational databases such as MySQL or Oracle
➢ Strong leadership and technology-mentoring skills are a must; code and design reviews are a
key part of this role.
➢ Experience with microservices and distributed, event-based architectures desired.
➢ Deep experience with Linux based development and deployment architectures.
➢ Experience with database query tuning.
➢ Experience with cloud platforms such as AWS is a great plus
➢ Experience with frameworks such as Angular or React is a great plus
➢ Awareness of and experience with reactive programming is a plus
➢ Experience with e-commerce/consumer-facing applications serving a large volume of users is a
great plus.
➢ Exposure to technologies like Kafka, Redis, Spark, Kinesis, Solr, Elasticsearch, etc. is a plus.
➢ Experience with NoSQL or big data is a great plus
➢ Hands-on full-stack experience desired.
➢ Must be adept at experimenting with new technologies
➢ Must have excellent communication (verbal and written), interpersonal, leadership, and problem-solving skills
➢ Must be able to work independently and enjoy working in a fast-paced start-up environment
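As a sketch of the kind of reusable server-side component this role involves, here is a minimal JSON-over-REST endpoint using Python's standard-library WSGI interface. The route, item fields, and payload are illustrative only, not taken from any real service.

```python
import json

def inventory_app(environ, start_response):
    """Minimal WSGI app returning an inventory snapshot as JSON.

    The endpoint shape and item fields are hypothetical examples.
    """
    items = [{"sku": "SKU-001", "qty": 12}]
    body = json.dumps({"items": items}).encode("utf-8")
    start_response("200 OK", [
        ("Content-Type", "application/json"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

A component like this can be mounted behind any WSGI server (or adapted to a Django/Node.js view), which is what makes it reusable across the consumer site and logistics services.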

About BigBasket
We're seeking an AI/ML Engineer to join our team.
As an AI/ML Engineer, you will be responsible for designing, developing, and implementing artificial intelligence (AI) and machine learning (ML) solutions to solve real-world business problems. You will work closely with cross-functional teams, including data scientists, software engineers, and product managers, to deploy and integrate applied AI/ML solutions into the products being built at NonStop. Your role will involve researching cutting-edge algorithms and data-processing techniques and implementing scalable solutions to drive innovation and improve the overall user experience.
Responsibilities
- Applied AI/ML engineering: building engineering solutions on top of the AI/ML tooling available in the industry today, e.g., engineering APIs around OpenAI
- AI/ML Model Development: Design, develop, and implement machine learning models and algorithms that address specific business challenges, such as natural language processing, computer vision, recommendation systems, anomaly detection, etc.
- Data Preprocessing and Feature Engineering: Cleanse, preprocess, and transform raw data into suitable formats for training and testing AI/ML models. Perform feature engineering to extract relevant features from the data
- Model Training and Evaluation: Train and validate AI/ML models using diverse datasets to achieve optimal performance. Employ appropriate evaluation metrics to assess model accuracy, precision, recall, and other relevant metrics
- Data Visualization: Create clear and insightful data visualizations to aid in understanding data patterns, model behavior, and performance metrics
- Deployment and Integration: Collaborate with software engineers and DevOps teams to deploy AI/ML models into production environments and integrate them into various applications and systems
- Data Security and Privacy: Ensure compliance with data privacy regulations and implement security measures to protect sensitive information used in AI/ML processes
- Continuous Learning: Stay updated with the latest advancements in AI/ML research, tools, and technologies, and apply them to improve existing models and develop novel solutions
- Documentation: Maintain detailed documentation of the AI/ML development process, including code, models, algorithms, and methodologies for easy understanding and future reference
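To make the evaluation-metrics responsibility concrete, here is a small, dependency-free sketch of computing precision and recall for a binary classifier by hand; in practice a library such as scikit-learn would provide these, and the labels below are made up for illustration.

```python
def precision_recall(y_true, y_pred, positive=1):
    """Compute precision and recall for binary predictions.

    precision = TP / (TP + FP); recall = TP / (TP + FN).
    """
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```

For example, with `y_true = [1, 1, 0, 0]` and `y_pred = [1, 0, 1, 0]` there is one true positive, one false positive, and one false negative, so both precision and recall come out to 0.5.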
Requirements
- Bachelor's, Master's or PhD in Computer Science, Data Science, Machine Learning, or a related field. Advanced degrees or certifications in AI/ML are a plus
- Proven experience as an AI/ML Engineer, Data Scientist, or related role, ideally with a strong portfolio of AI/ML projects
- Proficiency in programming languages commonly used for AI/ML, preferably Python
- Familiarity with popular AI/ML libraries and frameworks, such as TensorFlow, PyTorch, scikit-learn, etc.
- Familiarity with popular AI/ML models such as GPT-3, GPT-4, Llama 2, BERT, etc.
- Strong understanding of machine learning algorithms, statistics, and data structures
- Experience with data preprocessing, data wrangling, and feature engineering
- Knowledge of deep learning architectures, neural networks, and transfer learning
- Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud) for scalable AI/ML deployment
- Solid understanding of software engineering principles and best practices for writing maintainable and scalable code
- Excellent analytical and problem-solving skills, with the ability to think critically and propose innovative solutions
- Effective communication skills to collaborate with cross-functional teams and present complex technical concepts to non-technical stakeholders
We are seeking a highly skilled and motivated Python Developer with hands-on experience in AWS cloud services (Lambda, API Gateway, EC2), microservices architecture, PostgreSQL, and Docker. The ideal candidate will be responsible for designing, developing, deploying, and maintaining scalable backend services and APIs, with a strong emphasis on cloud-native solutions and containerized environments.
Key Responsibilities:
- Develop and maintain scalable backend services using Python (Flask, FastAPI, or Django).
- Design and deploy serverless applications using AWS Lambda and API Gateway.
- Build and manage RESTful APIs and microservices.
- Implement CI/CD pipelines for efficient and secure deployments.
- Work with Docker to containerize applications and manage container lifecycles.
- Develop and manage infrastructure on AWS (including EC2, IAM, S3, and other related services).
- Design efficient database schemas and write optimized SQL queries for PostgreSQL.
- Collaborate with DevOps, front-end developers, and product managers for end-to-end delivery.
- Write unit, integration, and performance tests to ensure code reliability and robustness.
- Monitor, troubleshoot, and optimize application performance in production environments.
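The serverless responsibilities above can be sketched as a minimal AWS Lambda handler for an API Gateway proxy integration. The event/response shapes follow the standard Lambda proxy contract; the query parameter and message are hypothetical.

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda handler behind API Gateway (proxy integration).

    Reads an illustrative ``name`` query parameter and returns a JSON
    body in the shape API Gateway expects from a proxy integration.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

In a real deployment this function would sit behind an API Gateway route, with CloudWatch capturing its logs and a CI/CD pipeline packaging and publishing new versions.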
Required Skills:
- Strong proficiency in Python and Python-based web frameworks.
- Experience with AWS services: Lambda, API Gateway, EC2, S3, CloudWatch.
- Sound knowledge of microservices architecture and asynchronous programming.
- Proficiency with PostgreSQL, including schema design and query optimization.
- Hands-on experience with Docker and containerized deployments.
- Understanding of CI/CD practices and tools like GitHub Actions, Jenkins, or CodePipeline.
- Familiarity with API documentation tools (Swagger/OpenAPI).
- Version control with Git.
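For the schema-design and query-optimization skill, here is a small sketch using Python's built-in `sqlite3` as a stand-in for PostgreSQL (the ideas transfer; table and column names are hypothetical). The composite index matches the hot query's filter columns so the lookup avoids a full table scan.

```python
import sqlite3

# In-memory database as a stand-in for PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        status TEXT NOT NULL,
        total REAL NOT NULL
    );
    -- Composite index matching the hot query path below.
    CREATE INDEX idx_orders_customer_status
        ON orders (customer_id, status);
""")
conn.executemany(
    "INSERT INTO orders (customer_id, status, total) VALUES (?, ?, ?)",
    [(1, "shipped", 10.0), (1, "pending", 5.0), (2, "shipped", 7.5)],
)
# This query's WHERE clause is covered by the index above.
rows = conn.execute(
    "SELECT id, total FROM orders WHERE customer_id = ? AND status = ?",
    (1, "shipped"),
).fetchall()
```

On PostgreSQL, `EXPLAIN ANALYZE` on the same query would confirm whether the planner actually uses the index, which is the usual starting point for query optimization.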
What are we looking for?
We are looking for hands-on coders who love what they do, have high attention to detail, and are looking for challenging opportunities that involve building products from scratch. Someone who is proactive and always keen to learn.
What will you be doing?
On a daily basis, some of your work will involve but is not limited to:
- Write clean, secure, and well-tested code
- Build tools and integrate systems to scale the effectiveness of the product
Work Culture at Merge:
Commitment to excellence - In every output, we produce as individuals and as a company, we have to strive for world-class quality. We’re making a change in the world. It will push us out of our comfort zones. We operate in a rapidly changing market and strive to deliver high-quality products faster than anyone else.
We get it done - We take ownership of what we do. Working here is about really, truly owning everything you do. There’s no such thing as “Not my job.” If you see a problem that needs solving, you can – and should – be the one to solve it.
Requirements
Skills That Will Help You Excel At Merge
- You have 2 to 5 years of experience building highly reliable and scalable backend systems
- Experience in two or more languages. Go, Node.js, Python, or Java would be ideal.
- You have experience in high-throughput distributed systems and microservices
- Experience with AWS and CI/CD workflow
- Driven, and passionate about building products
- You take ownership of your code
- You have good communication skills in English
- You are familiar with security best practices
- Bachelor’s degree in computer science, engineering, or a related field.
- At least 1+ years of experience in a similar role is desired.
- Excellent technical, diagnostic, and troubleshooting skills.
- Strong leadership skills to drive good coding and design practices across multiple engineering teams
- Willingness to build professional relationships with staff and clients.
- Excellent communication, motivational, and interpersonal skills.
- Expertise in architecting, building, and maintaining ultra-low latency, cost-efficient systems in cloud environments
- Excellent track record in modernizing cloud-based applications using micro services, containers, and other architectures
- Experience and working knowledge in building large-scale, data-intensive text search applications using Solr/Elasticsearch is a must
- Experience and working knowledge of AI/ML/MLOps with respect to building large-scale, data-intensive applications would be a plus
- Experience and working knowledge in building Java-based Spring Boot microservices integrated with any messaging framework
- Driven, highly motivated, and passionate about development and innovation
- Determining project requirements and developing work schedules for the team.
- Delegating tasks and achieving daily, weekly, and monthly goals.
- Liaising with team members, management, and clients to ensure projects are completed to standard.
- Identifying risks and forming contingency plans as soon as possible.
- Analyzing existing operations and scheduling training sessions and meetings to discuss improvements.
- Keeping up-to-date with industry trends and developments.
- Updating work schedules and performing troubleshooting as required.
- Motivating staff and creating a space where they can ask questions and voice their concerns.
- Being transparent with the team about challenges, failures, and successes.
- Writing progress reports and delivering presentations to the relevant stakeholders.
- Extremely hands-on in delivering development and R&D tasks
- Design, plan and perform dev-analysis to determine effort estimates on every sprint for the team
- Running demos and ensuring thorough documentation of the features built
- Identify and plan upgrades to technologies and frameworks from time to time
- Thorough experience with the Spring Framework
- Proficient knowledge of SQL and NoSQL is a must
- Hands-on experience in designing and developing applications using Java EE platforms
- Designing and developing high-volume, low-latency applications for mission-critical systems and delivering high availability and performance
- Contributing in all phases of the development lifecycle
- Writing well-designed, testable, efficient code
- Excellent Communication Skills
- Willingness to own a responsibility
- Ability to work in a team as well as an individual
- Ability to work under pressure and maintain deadlines
- Experience working on projects end to end is good to have
Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the Data Lake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open-source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides many opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low-latency access to distributed storage, auto-scaling, and self-healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
- Lead the team to solve complex and unknown problems
- Solve technical problems and customer issues with technical expertise
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Mentor other team members for high quality and design
- Collaborate with Product Management to deliver on customer requirements and innovation
- Collaborate with Support and field teams to ensure that customers are successful with Dremio
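The concurrency responsibility above can be illustrated with a small, hedged sketch: splitting work across threads while a lock guards the one piece of shared state. This is a toy stand-in for the kind of multithreaded reasoning the role requires, not Dremio code.

```python
import threading

def parallel_sum(values, workers=4):
    """Sum a list across several threads.

    Each worker computes its partial sum outside the lock and only
    holds the lock for the brief update of the shared total, keeping
    the critical section small.
    """
    total = 0
    lock = threading.Lock()
    chunk = max(1, len(values) // workers)

    def worker(part):
        nonlocal total
        partial = sum(part)      # heavy work, no lock held
        with lock:               # short critical section
            total += partial

    threads = [
        threading.Thread(target=worker, args=(values[i:i + chunk],))
        for i in range(0, len(values), chunk)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total
```

The design choice worth noticing is the same one that matters at scale: do the expensive work outside the lock so contention stays low as worker counts grow.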
Requirements
- B.S./M.S./equivalent in Computer Science or a related technical field, or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years of experience in developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed file systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems and mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team










