

Must have:
- 8+ years of experience with a significant focus on developing, deploying & supporting AI solutions in production environments.
- Proven experience in building enterprise software products for B2B businesses, particularly in the supply chain domain.
- Good understanding of generics, OOP concepts, and design patterns
- Solid engineering and coding skills, with the ability to write high-performance, production-quality code in Python
- Proficiency with ML libraries and frameworks (e.g., Pandas, TensorFlow, PyTorch, scikit-learn).
- Strong expertise in time series forecasting using statistical, ML, DL, and foundation models
- Experience processing time series data with techniques such as decomposition, clustering, and outlier detection and treatment (see the sketch after this list)
- Exposure to generative AI models and agent architectures on platforms such as AWS Bedrock, CrewAI, Mosaic/Databricks, and Azure
- Experience working with modern data architectures, including data lakes and data warehouses, using one or more frameworks such as Airbyte, Airflow, Dagster, AWS Glue, Snowflake, and dbt
- Hands-on experience with cloud platforms (e.g., AWS, Azure, GCP) and deploying ML models in cloud environments.
- Excellent problem-solving skills and the ability to work independently as well as in a collaborative team environment.
- Effective communication skills, with the ability to convey complex technical concepts to non-technical stakeholders
Good To Have:
- Experience with MLOps tools and practices for continuous integration and deployment of ML models.
- Familiarity with deploying applications on Kubernetes
- Knowledge of supply chain management principles and challenges.
- A Master's or Ph.D. in Computer Science, Machine Learning, Data Science, or a related field is preferred
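For illustration, here is a minimal sketch of the kind of time series preparation this role involves: classical decomposition plus simple z-score outlier treatment. It assumes pandas and statsmodels; the monthly demand series and the 3-sigma threshold are hypothetical choices, not a prescribed approach.

```python
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical monthly demand series; real data would come from a warehouse.
idx = pd.date_range("2021-01-01", periods=36, freq="MS")
demand = pd.Series(range(100, 136), index=idx).astype(float)
demand.iloc[10] = 400.0  # injected outlier for illustration

# Classical additive decomposition into trend, seasonal, and residual parts.
parts = seasonal_decompose(demand, model="additive", period=12)

# Flag residuals more than 3 standard deviations from the mean as outliers,
# then replace them with the trend + seasonal estimate (one simple treatment).
resid = parts.resid
mask = (resid - resid.mean()).abs() > 3 * resid.std()
cleaned = demand.copy()
cleaned[mask] = (parts.trend + parts.seasonal)[mask]
print(cleaned[mask])  # the treated points
```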



- 5+ years of experience
- Flask and REST API development experience (see the sketch after this list)
- Proficiency in Python programming.
- Basic knowledge of front-end development.
- Basic knowledge of data manipulation and analysis libraries
- Code versioning and collaboration (Git)
- Knowledge of libraries for extracting data from websites (web scraping)
- Knowledge of SQL and NoSQL databases
- Familiarity with RESTful APIs
- Familiarity with Cloud (Azure /AWS) technologies
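As a concrete illustration of the REST API work listed above, here is a minimal Flask endpoint. The routes and payload shape are hypothetical; this is a sketch, not any team's actual API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for a real database.
ITEMS = {1: {"id": 1, "name": "widget"}}

@app.route("/api/items/<int:item_id>", methods=["GET"])
def get_item(item_id):
    item = ITEMS.get(item_id)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(item)

@app.route("/api/items", methods=["POST"])
def create_item():
    payload = request.get_json()
    item_id = max(ITEMS) + 1
    ITEMS[item_id] = {"id": item_id, "name": payload["name"]}
    return jsonify(ITEMS[item_id]), 201

if __name__ == "__main__":
    app.run(debug=True)
```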


- Develop and maintain scalable back-end applications using Python frameworks such as Flask, Django, or FastAPI.
- Design, build, and optimize data pipelines for ETL processes using tools like PySpark, Airflow, and similar technologies (see the DAG sketch below).
- Work with relational and NoSQL databases to manage and process large datasets efficiently.
- Collaborate with data scientists to clean, transform, and prepare data for analytics and machine learning models.
- Work in a dynamic environment at the intersection of software development and data engineering.
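To make the pipeline work concrete, here is a minimal Airflow DAG with a hypothetical extract/transform/load split. The DAG name, task bodies, and daily schedule are illustrative assumptions.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw records from a source system.
    print("extracting")

def transform():
    # Placeholder: clean and reshape the extracted records.
    print("transforming")

def load():
    # Placeholder: write the transformed records to the warehouse.
    print("loading")

with DAG(
    dag_id="example_etl",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # run extract, then transform, then load
```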

- 2+ years of experience with Python, Django, MongoDB, Express, MySQL, etc.
- Have built applications capable of serving high volume with low latency in production
- Following practices of agile development with continuous integration/deployment
- Experience in deploying applications in AWS cloud
- Knowledge in building applications for Fintech/payments domain is a bonus
- Bachelor’s or Master’s degree in CS or equivalent from a reputed institution
- Being a self-starter and a hustler with a desire to achieve greatness is a MUST
- Immediate joining preferred
- You will have the flexibility to design the application and systems from the ground up.
- Your primary responsibility is to implement business logic in the backend and create awesome RESTful APIs for seamless integration with our mobile front end (a minimal sketch follows).
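A minimal sketch of a backend endpoint of this kind, using a plain Django view; the URL, model-free stub, and response shape are hypothetical:

```python
# views.py -- a hypothetical order-status endpoint for a mobile client.
from django.http import JsonResponse

def order_status(request, order_id):
    # Real logic would look the order up in the database; this is a stub.
    return JsonResponse({"order_id": order_id, "status": "shipped"})

# urls.py -- route the endpoint under /api/.
from django.urls import path

urlpatterns = [
    path("api/orders/<int:order_id>/status/", order_status),
]
```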

Full-time position
Work Location: Hyderabad
Experience level: 3 to 5 years
Mandatory Skills: Python, Django/Flask, and REST API
Package: Up to 20 LPA
Job Description:
--Experience in web application development using Python and Django/Flask.
--Proficient in developing REST APIs and integrations, and familiar with JSON-formatted data.
--Good to have: knowledge of front-end frameworks like Vue.js/Angular/React.js.
--Writing high-quality code with best practices based on technical requirements.
--Hands-on experience in analysis, design, coding, and implementation of complex, custom-built software products.
--Should have database experience, preferably with Redis (see the sketch after this list).
--Experience working with Git or an equivalent version control system, following best practices.
--Good to have: knowledge of Elasticsearch, AWS, and Docker.
--Should have an interest in exploring and working in the cybersecurity domain.
--Experience with Agile development methods.
--Should have strong analytical and logical skills.
--Should be good at fundamentals: data structures, algorithms, programming languages, distributed systems, and information retrieval.
--Should have good communication skills and client-facing experience.
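As a small illustration of the Redis experience mentioned above, here is a minimal caching sketch using the redis-py client; the key scheme, TTL, and connection defaults are hypothetical.

```python
import redis

# Connect to a local Redis instance (hypothetical host/port defaults).
r = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True)

def get_user_profile(user_id: int) -> str:
    key = f"user:{user_id}:profile"     # hypothetical key scheme
    cached = r.get(key)
    if cached is not None:
        return cached                   # cache hit
    profile = f"profile-for-{user_id}"  # stand-in for a slow DB lookup
    r.set(key, profile, ex=300)         # cache for 5 minutes
    return profile

print(get_user_profile(42))
```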


SpringML is looking to hire a top-notch Data Engineer who is passionate about working with data and using the latest distributed frameworks to process large datasets. As a Data Engineer, your primary role will be to design and build data pipelines. You will focus on helping client projects with data integration, data preparation, and implementing machine learning on datasets. In this role, you will work with some of the latest technologies, collaborate with partners on early wins, take a consultative approach with clients, interact daily with executive leadership, and help build a great company. Chosen team members will be part of the core team and play a critical role in scaling up our emerging practice.
Title: Data Engineers
Location: Hyderabad
Work Timings: 10:00 AM - 06:00 PM
RESPONSIBILITIES:
Ability to work as a member of a team assigned to design and implement data integration solutions.
Build data pipelines using standard frameworks such as Hadoop, Apache Beam, and other open-source solutions (see the sketch after the skills list).
Learn quickly – ability to understand and rapidly comprehend new areas – functional and technical – and apply detailed and critical thinking to customer solutions.
Propose design solutions and recommend best practices for large-scale data analysis.
SKILLS:
B.Tech degree in computer science, mathematics, or other relevant fields.
Good programming skills – experience and expertise in one of the following: Java, Python, Scala, or C.
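For illustration, here is a minimal Apache Beam pipeline in Python that counts words in a text file. The file paths and step names are hypothetical; it is a sketch of the style of pipeline work described above, not a client deliverable.

```python
import apache_beam as beam

# A tiny batch pipeline: read lines, split into words, count occurrences.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "ReadLines" >> beam.io.ReadFromText("input.txt")  # hypothetical input
        | "SplitWords" >> beam.FlatMap(lambda line: line.split())
        | "CountWords" >> beam.combiners.Count.PerElement()
        | "Format" >> beam.Map(lambda kv: f"{kv[0]}: {kv[1]}")
        | "Write" >> beam.io.WriteToText("word_counts")     # hypothetical output
    )
```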

We are looking to hire a full-time, remote (India) backend engineer with a flair for writing full-stack code to help create a SaaS product from the ground up in a niche industry.
This will be the v2.0 of an existing legacy platform with paying customers. This position is within the first few hires in the founding team of the revamped company.
We are on a mission to build a truly world class product and deliver at high speeds with high quality.
Read on and apply for the job if you are a doer, like no BS, think you're competent enough to get things done, want some quiet time every day, want to make a difference, value work-life balance, and enjoy life outside work.
Experience -
At least 3 to 4 years building scalable SaaS applications in the cloud. It would be great if you can showcase anything notable: education from premier institutions, Git contributions, tech blogs, volunteering, pet projects, etc.
Responsibilities -
- Architect, write, deploy, and own the (SaaS) product from a technical standpoint, working closely with other devs on the team
- Create and deliver the product end-to-end all the way to the user
- Write highly performant code and continuously measure performance and make the application better based on benchmarks
- Debug production issues and keep the application quality high always
- Assist with hiring more people when the team grows
- Pick up any appropriate work as in any early-stage company
General Skills Needed -
- Should be able to build and deploy multi-tenant SaaS applications end to end (a minimal sketch appears after the technical skills list)
- Need to have a technology-agnostic mindset. Should be able to easily pick up any new technology based on the use case.
- Should be an expert software craftsman/craftswoman. Must own the code and be proud of what you build.
- Write well thought out, well-tested code with extensive unit tests and integration tests so that the addition of future features is easy.
- Must know cloud deployment concepts and be comfortable creating and maintaining cost-effective cloud deployment strategies. AWS experience highly preferred.
- Must be able to design and architect simple, market-tested yet highly scalable solutions
- Must be able to quickly iterate and produce working software at a high speed. Must not be shy to scrap and rebuild when there is a need
- Must be appreciative of documentation and write well-documented code and technical documentation alongside
Technical Skills Needed -
- Expert-level knowledge of Python and Django. We are a Python/Django/JavaScript/ReactJS/AWS shop.
- Must have good knowledge/experience in creating cost-efficient and scalable cloud deployments (AWS preferred)
- Must have good knowledge of industry-standard design patterns and tools
- Must have a good understanding of various frameworks on authentication and authorization, billing and metrics
- Some experience in data analytics and reporting: creating reports based on the data collected and delivering them to the frontend
- Expertise with frameworks:
  - Django or similar backend frameworks
  - PostgreSQL, MySQL, and any NoSQL or other database technology
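The multi-tenancy requirement above could look like the following common Django pattern: every row carries a tenant foreign key and queries are always scoped to the requesting tenant. Model and field names are hypothetical, and this is one pattern among several (schema-per-tenant is another).

```python
# models.py -- hypothetical models for row-level multi-tenancy.
from django.db import models

class Tenant(models.Model):
    name = models.CharField(max_length=100)

class Project(models.Model):
    tenant = models.ForeignKey(Tenant, on_delete=models.CASCADE)
    title = models.CharField(max_length=200)

def projects_for(tenant: Tenant):
    # Every query is scoped to one tenant so data never leaks across customers.
    return Project.objects.filter(tenant=tenant)
```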
Job Perks
Perks -
- Opportunity to build a SaaS product from the ground up. Lots of challenges to tackle
- Work remotely from anywhere within India
- Emphasis on work-life balance and professional development
- Flexible work hours and a lot of autonomy
- Work with a very informal team collaborating over Slack. No meaningless meetings. Focus is on getting things done, not hours put in
- Opportunity to grow the team with the company
- Market salary and a yearly bonus outside the salary based on company + individual performance
- Generous PTO plan

Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the Data Lake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open-source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides many opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low latency access to distributed storage, auto scaling, and self healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment (see the sketch at the end of this posting)
- Lead the team to solve complex and unknown problems
- Solve technical problems and customer issues with technical expertise
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Mentor other team members for high quality and design
- Collaborate with Product Management to deliver on customer requirements and innovation
- Collaborate with Support and field teams to ensure that customers are successful with Dremio
Requirements
- B.S./M.S./equivalent degree in Computer Science or a related technical field, or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years of experience developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed file systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems and mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team
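This role centers on Java/C++, but the concurrency concern called out above is language-agnostic. Here is a small Python sketch of lock-based synchronization protecting shared state across threads; it is a toy illustration of the concept, not anything from Dremio's codebase.

```python
import threading

class Counter:
    """A thread-safe counter: the lock makes read-modify-write atomic."""

    def __init__(self) -> None:
        self._value = 0
        self._lock = threading.Lock()

    def increment(self) -> None:
        # Without the lock, concurrent += operations could lose updates.
        with self._lock:
            self._value += 1

    @property
    def value(self) -> int:
        with self._lock:
            return self._value

counter = Counter()

def worker() -> None:
    for _ in range(10_000):
        counter.increment()

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # always 80000 with the lock in place
```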







