Fulfillment Jobs in Hyderabad
Job Summary
We are looking for a Technical Lead with strong expertise in AWS, Node.js, and MongoDB to drive the design, development, and deployment of scalable applications. The ideal candidate will be responsible for leading a team of developers, defining best practices, and ensuring high-quality code delivery while working in an agile environment.
Key Responsibilities
- Lead the end-to-end development of web applications using Node.js, Angular, and MongoDB.
- Architect scalable, secure, and high-performance cloud-based applications on AWS.
- Provide technical guidance, code reviews, and mentorship to the development team.
- Optimize backend APIs and database queries for performance and scalability.
- Ensure best practices in coding, security, and DevOps for cloud deployment.
- Collaborate with cross-functional teams including UX/UI designers, product managers, and DevOps engineers.
- Implement CI/CD pipelines, automated testing, and infrastructure as code on AWS.
- Troubleshoot production issues and drive continuous improvement in system reliability.
- Stay updated with emerging technologies and advocate for relevant tech adoption.
Required Skills & Experience
- 4+ years of hands-on experience in software development.
- Strong proficiency in Node.js (Fastify/Express), Angular (latest versions), and MongoDB (Mongoose, Aggregations, Indexing).
- Cloud expertise with AWS services such as EC2, S3, Lambda, API Gateway, and ECS.
- Experience with Microservices Architecture.
- Strong knowledge of RESTful APIs, WebSockets, GraphQL, and authentication mechanisms (OAuth, JWT, SSO).
- Hands-on experience with performance tuning, caching strategies (Redis, CloudFront), and database optimization (a cache-aside sketch follows this list).
- Strong problem-solving, leadership, and communication skills.
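For illustration, below is a minimal cache-aside sketch for the Redis caching strategy named above. It is written in Python for brevity (the role itself centers on Node.js), and fetch_product is a hypothetical stand-in for a real database read; it assumes the redis package and a Redis instance on localhost.

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_product(product_id: str) -> dict:
    # Hypothetical stand-in for the real database read (e.g., a MongoDB findOne).
    return {"id": product_id, "name": "demo", "price": 9.99}

def get_product(product_id: str, ttl_seconds: int = 300) -> dict:
    key = f"product:{product_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)                    # cache hit: skip the database
    product = fetch_product(product_id)              # cache miss: read the source of truth
    r.set(key, json.dumps(product), ex=ttl_seconds)  # TTL bounds staleness
    return product
```

The same pattern carries over to Node.js with a client such as ioredis: check the cache, fall back to the database on a miss, and write back with a TTL.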
Good to Have
- Experience with AI/ML integration in applications.
- Knowledge of Serverless Architectures (AWS Lambda, Step Functions).
- Exposure to NoSQL databases such as Elasticsearch.
Why Join Us?
- Work on cutting-edge technologies in a fast-paced environment.
- Opportunity to lead a strong technical team and shape the product architecture.
- Competitive salary and benefits package.
- A culture that values innovation, ownership, and growth.
Job Title: Data Engineer / Integration Engineer
Job Summary:
We are seeking a highly skilled Data Engineer / Integration Engineer to join our team. The ideal candidate will have expertise in Python, workflow orchestration, cloud platforms (GCP/Google BigQuery), big data frameworks (Apache Spark or similar), API integration, and Oracle EBS. The role involves designing, developing, and maintaining scalable data pipelines, integrating various systems, and ensuring data quality and consistency across platforms. Knowledge of Ascend.io is a plus.
Key Responsibilities:
- Design, build, and maintain scalable data pipelines and workflows.
- Develop and optimize ETL/ELT processes using Python and workflow automation tools.
- Implement and manage data integration between various systems, including APIs and Oracle EBS.
- Work with Google Cloud Platform (GCP) or Google BigQuery (GBQ) for data storage, processing, and analytics (see the query sketch after this list).
- Utilize Apache Spark or similar big data frameworks for efficient data processing.
- Develop robust API integrations for seamless data exchange between applications.
- Ensure data accuracy, consistency, and security across all systems.
- Monitor and troubleshoot data pipelines, identifying and resolving performance issues.
- Collaborate with data analysts, engineers, and business teams to align data solutions with business goals.
- Document data workflows, processes, and best practices for future reference.
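To make the GCP/BigQuery responsibility concrete, here is a minimal query sketch using the google-cloud-bigquery client library. The project, dataset, and table names are hypothetical, and it assumes default application credentials are already configured.

```python
from google.cloud import bigquery

client = bigquery.Client()  # picks up default application credentials

query = """
    SELECT order_id, status
    FROM `my_project.fulfillment.orders`  -- hypothetical dataset and table
    WHERE status = 'PENDING'
"""

for row in client.query(query).result():  # submits the job and waits for completion
    print(row.order_id, row.status)
```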
Required Skills & Qualifications:
- Strong proficiency in Python for data engineering and workflow automation.
- Experience with workflow orchestration tools (e.g., Apache Airflow, Prefect, or similar; see the DAG sketch after this list).
- Hands-on experience with Google Cloud Platform (GCP) or Google BigQuery (GBQ).
- Expertise in big data processing frameworks, such as Apache Spark.
- Experience with API integrations (REST, SOAP, GraphQL) and handling structured/unstructured data.
- Strong problem-solving skills and ability to optimize data pipelines for performance.
- Experience working in an agile environment with CI/CD processes.
- Strong communication and collaboration skills.
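As a sketch of the orchestration experience listed above, here is a minimal Apache Airflow DAG. It assumes Airflow 2.4+ (for the schedule argument); the DAG id and the extract/load callables are hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from the source system")  # placeholder extract step

def load():
    print("write rows to the warehouse")  # placeholder load step

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older 2.x releases use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # extract runs before load
```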
Preferred Skills & Nice-to-Have:
- Experience with the Ascend.io platform for data pipeline automation.
- Knowledge of SQL and NoSQL databases.
- Familiarity with Docker and Kubernetes for containerized workloads.
- Exposure to machine learning workflows is a plus.
Why Join Us?
- Opportunity to work on cutting-edge data engineering projects.
- Collaborative and dynamic work environment.
- Competitive compensation and benefits.
- Professional growth opportunities with exposure to the latest technologies.
How to Apply:
Interested candidates can apply by sending their resume to [your email/contact].
Job Description
We are seeking a talented and experienced Java Spring Boot microservices developer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining scalable, high-performance microservices-based applications using Java and the Spring Boot framework.
Responsibilities:
● Collaborate with cross-functional teams to gather and analyze requirements for microservices applications.
● Design, develop, and implement robust, scalable microservices using Java and Spring Boot.
● Build RESTful APIs and integrate them with external systems as required.
● Ensure the performance, security, and reliability of the microservices through thorough testing and debugging.
● Participate in code reviews to ensure code quality, maintainability, and adherence to coding standards.
● Troubleshoot and resolve technical issues related to microservices and their integration with other components.
● Continuously research and evaluate emerging technologies and industry trends related to microservices, and recommend improvements to enhance application development.
Requirements:
● Bachelor's degree in Computer Science, Software Engineering, or a related field.
● Strong experience in Java development, specifically with the Spring Boot framework.
● Proficiency in designing and developing microservices architectures and implementing them using industry best practices.
● Solid understanding of RESTful API design principles and experience in building and consuming APIs.
● Knowledge of cloud platforms and experience with containerization technologies (e.g., Docker, Kubernetes) is highly desirable.
● Familiarity with agile development methodologies and tools (e.g., Scrum, JIRA) is a plus.
● Excellent problem-solving and analytical skills with keen attention to detail.
● Strong communication and collaboration skills to work effectively within a team environment.
If you are a passionate Java developer with a strong focus on building scalable microservices applications using Spring Boot, we would love to hear from you. Join our team and contribute to the development of cutting-edge solutions that deliver exceptional user experiences.
To apply, please submit your resume and a cover letter outlining your relevant experience and achievements in Java Spring Boot microservices development.
- Experience with Node.js, Express, and FeathersJS.
- Knowledge of third-party API integration.
- Databases: MySQL or NoSQL.
- Kafka client integration with Node.js (see the consumer sketch after this list).
- Redis integration using Node.js.
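A minimal consumer sketch for the Kafka integration above, shown in Python with the kafka-python package for brevity (the role expects a Node.js client such as kafkajs). The topic, consumer group, and broker address are hypothetical.

```python
from kafka import KafkaConsumer  # from the kafka-python package

consumer = KafkaConsumer(
    "orders",                            # hypothetical topic
    bootstrap_servers="localhost:9092",  # hypothetical broker
    group_id="fulfillment-workers",      # hypothetical consumer group
    auto_offset_reset="earliest",
)

for message in consumer:
    # message.value is raw bytes; a real handler would deserialize the event
    # and, for example, write a derived record into Redis.
    print(message.topic, message.offset, message.value)
```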
Responsibilities
- Build and mentor the computer vision team at TransPacks
- Drive the productionization of algorithms developed through hard-core research to an industrial level
- Own the design, development, testing, deployment, and craftsmanship of the team’s infrastructure and systems, capable of handling massive amounts of requests with high reliability and scalability
- Leverage deep and broad technical expertise to mentor engineers and provide leadership on resolving complex technology issues
- Apply the entrepreneurial, out-of-the-box thinking essential for a technology startup
- Guide the team in writing unit-test code for robustness, including edge cases, usability, and general reliability
Eligibility
- B.Tech in Computer Science and Engineering/Electronics/Electrical Engineering, with demonstrated interest in image processing/computer vision (courses, projects, etc.) and 6-8 years of experience
- M.Tech in Computer Science and Engineering/Electronics/Electrical Engineering, with demonstrated interest in image processing/computer vision (thesis work) and 4-7 years of experience
- Ph.D. in Computer Science and Engineering/Electronics/Electrical Engineering, with a dissertation in image processing/computer vision and an inclination to work in industry on innovative solutions to practical problems
Requirements
- In-depth understanding of image processing algorithms, pattern recognition methods, and rule-based classifiers
- Experience in feature extraction, object recognition and tracking, image registration, noise reduction, image calibration, and correction
- Ability to understand, optimize and debug imaging algorithms
- Understanding of and experience with the OpenCV library (see the sketch after this list)
- Fundamental understanding of mathematical techniques involved in ML and DL schemas (Instance-based methods, Boosting methods, PGM, Neural Networks etc.)
- Thorough understanding of state-of-the-art DL concepts (sequence modeling, attention, convolution, etc.), along with a knack for imagining new schemas that work for the given data.
- Understanding of engineering principles and a clear understanding of data structures and algorithms
- Experience in writing production-level code using either C++ or Java
- Experience with Python libraries such as pandas, NumPy, and SciPy
- Experience with TensorFlow and scikit-learn.
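As a small illustration of the feature-extraction and noise-reduction items above, here is a minimal OpenCV sketch. It assumes the opencv-python package, and the input filename is hypothetical.

```python
import cv2

img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
if img is None:
    raise FileNotFoundError("sample.png not found")

img = cv2.GaussianBlur(img, (5, 5), 0)  # simple noise reduction before detection
orb = cv2.ORB_create(nfeatures=500)     # fast, license-free keypoint detector

keypoints, descriptors = orb.detectAndCompute(img, None)
print(f"extracted {len(keypoints)} keypoints")  # descriptors feed matching/registration
```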
- Expertise in designing and implementing enterprise-scale database (OLTP) and data warehouse solutions.
- Hands-on experience implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics), and big data processing using Azure Databricks and Azure HDInsight.
- Expert in T-SQL programming for complex stored procedures, functions, views, and query optimization.
- Database development for both on-premises and SaaS applications using SQL Server and PostgreSQL.
- Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
- Experience and expertise in building machine learning models using logistic and linear regression, decision tree, and random forest algorithms (see the sketch after this list).
- PolyBase queries for exporting and importing data into Azure Data Lake.
- Building tabular and multidimensional data models using SQL Server Data Tools.
- Writing data preparation, cleaning, and processing steps using Python, Scala, and R.
- Programming experience with the Python libraries NumPy, pandas, and Matplotlib.
- Implementing NoSQL databases and writing queries using Cypher.
- Designing end-user visualizations using Power BI, QlikView, and Tableau.
- Experience working with all versions of SQL Server: 2005, 2008, 2008 R2, 2012, 2014, 2016, 2017, and 2019.
- Experience using the expression languages MDX and DAX.
- Expertise in migrating on-premises SQL Server databases to Microsoft Azure (SQL Azure).
- Hands on experience in using Azure blob storage, Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2.
- Performance tuning of complex SQL queries, with hands-on experience using SQL Server Extended Events.
- Data modeling in Power BI for ad hoc reporting.
- Raw data load automation using T-SQL and SSIS.
- Experience in using U-SQL for Azure Data Lake Analytics.
- Hands on experience in generating SSRS reports using MDX.
- Experience in designing predictive models using Python and SQL Server.
- Developing machine learning models using Azure Databricks and SQL Server.
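To make the modeling bullets concrete, here is a minimal logistic-regression sketch with scikit-learn on synthetic data; in practice the features would come from SQL Server or Azure Databricks rather than make_classification.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for features that would normally come from the warehouse.
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
```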