
About Tee Technology Pvt. Ltd.
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including at recent companies
- Company background: product companies preferred; exceptions for service-company candidates with strong MLOps and AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide the CTC breakup (fixed + variable)
- Are you open to a face-to-face (F2F) round?
- Has the candidate filled out the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
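The drift-monitoring responsibility above can be sketched in plain Python. This is a minimal illustration, not a production monitor: it computes the Population Stability Index (PSI) between a training-time feature sample and a serving-time sample, using the common 0.1/0.25 rule-of-thumb thresholds. In practice a check like this would run as an Airflow task and publish the metric to CloudWatch or Prometheus for alerting.

```python
import math
from collections import Counter

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    Rule of thumb: PSI < 0.1 means no significant drift,
    0.1-0.25 moderate drift, > 0.25 major drift.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant sample

    def bucket(xs):
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in xs)
        total = len(xs)
        # Small epsilon avoids log(0) for empty buckets.
        return [max(counts.get(i, 0) / total, 1e-6) for i in range(bins)]

    e, a = bucket(expected), bucket(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Training-time feature distribution vs. a drifted serving distribution.
baseline = [i / 100 for i in range(100)]          # uniform on [0, 1)
shifted  = [0.5 + i / 200 for i in range(100)]    # uniform on [0.5, 1)

print(psi(baseline, baseline) < 0.1)   # identical data: no drift
print(psi(baseline, shifted) > 0.25)   # shifted data: major drift
```

The same function works for model-score drift: compare the score distribution at training time against a rolling window of production scores.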
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using JupyterHub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Job Summary:
As a Shopify App Developer at [Your Company Name], you will be responsible for designing, developing, and maintaining custom applications for the Shopify platform. You will collaborate with cross-functional teams to create solutions that meet our clients' needs and improve their eCommerce operations. The ideal candidate will have a strong background in Shopify app development, excellent problem-solving skills, and a passion for delivering exceptional user experiences.
Key Responsibilities:
- App Development: Design, develop, and deploy custom Shopify apps using Shopify’s API, Polaris, and other relevant technologies.
- Customization: Customize and extend Shopify’s existing functionalities to meet specific client requirements.
- Integration: Integrate Shopify apps with third-party services, APIs, and data sources as needed.
- Maintenance & Support: Troubleshoot and resolve issues related to Shopify apps and provide ongoing support and maintenance.
- Collaboration: Work closely with project managers, designers, and other developers to ensure that app solutions align with project goals and client expectations.
- Testing & Quality Assurance: Conduct thorough testing of applications to ensure reliability, performance, and adherence to Shopify’s standards.
- Documentation: Create and maintain clear documentation for code, processes, and app functionality.
- Innovation: Stay updated with the latest Shopify developments, industry trends, and best practices to continuously improve app functionality and user experience.
Qualifications:
- Experience: Proven experience in developing Shopify apps, including a portfolio of past projects or apps.
- Technical Skills: Proficiency in Shopify’s APIs, Liquid templating language, JavaScript, HTML, CSS, and familiarity with Shopify Polaris design system.
- Programming Languages: Strong knowledge of backend programming languages such as Ruby, PHP, or Node.js.
- Database Management: Experience with database technologies such as MySQL, MongoDB, or PostgreSQL.
- Problem-Solving: Excellent problem-solving skills with the ability to troubleshoot and resolve complex technical issues.
- Communication: Strong verbal and written communication skills, with the ability to explain technical concepts to non-technical stakeholders.
- Team Player: Ability to work effectively both independently and as part of a team in a fast-paced environment.
Must have: strong Python.

- Take the lead and manage the backend team.
- Take ownership of and develop new microservices/systems to improve the products.
- Manage and extend our Linux-based content management system as well as our web servers.
- 4+ years of hands-on experience working with serverless architecture.
- Hands-on experience with products at a scale of > 50K DAUs.
- Should be able to architect and implement complex services/systems in a simple and scalable manner.
- Self-starter with the ability to lead a team.
- Should be an expert in: Python; AWS Lambda, RDS, and EC2; SQL databases, specifically PostgreSQL; NoSQL databases, specifically DynamoDB.
- Should be able to write automated test cases for developed code.
- Should be proficient in the Linux environment.
- The following skills are an added bonus: experience developing with Neo4j databases; Firebase services; experience working with Docker and ECS.
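The serverless expectations above (Python plus AWS Lambda) can be illustrated with a minimal handler sketch. The event shape follows the API Gateway proxy format; the `user_id` parameter and the stubbed lookup are hypothetical, and a real implementation would query DynamoDB or RDS via boto3.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy event.

    Hypothetical example: the path and parameter names are illustrative.
    """
    params = event.get("queryStringParameters") or {}
    user_id = params.get("user_id")
    if not user_id:
        return {"statusCode": 400,
                "body": json.dumps({"error": "user_id is required"})}
    # In a real service this would query DynamoDB/RDS via boto3;
    # stubbed here so the handler stays self-contained and testable.
    return {"statusCode": 200,
            "body": json.dumps({"user_id": user_id, "status": "active"})}

# Simulate an API Gateway invocation locally.
resp = handler({"queryStringParameters": {"user_id": "42"}}, None)
print(resp["statusCode"])  # → 200
```

Because the handler is a plain function taking a dict, it is straightforward to cover with the automated test cases the role calls for, no AWS account required.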
- Your word counts: you’ll get to play a key role in shaping the product roadmap and will be involved in every stage.
- Learning never stops: as we advance into the growth stage, there is immense potential and relevance to apply new developments you learn in your domain.
- Sponsored training: want to learn something that helps improve your productivity or knowledge? We’ll sponsor that.
- Remote friendly: even before Covid, half of us worked from home. Heck, you want to work from a village? Go for it.
- Healthy company culture: we nurture a conducive environment for your personal and professional growth, and take extreme care to make sure you are happy at work.
- Everyone gets to lead: you own your idea, and you lead its execution.
- Smart work is what matters with us: we don’t count hours; we value getting the work done.
- Teammates who will sing and jam with you.
Talent Acquisition, Competency Mapping, HR Operations, Compensation and Benefits, Succession Planning
4 to 8 years of experience.
Certification A2 or any
- Azure component provisioning.
- Hands-on experience configuring and troubleshooting Application Gateway firewall, load balancers, VM/disk encryption, Azure vaults, network security groups, and web application firewalls.
- Hands-on experience and troubleshooting on Azure Virtual Network Gateway, ExpressRoute, and Azure chatbot services.
- Hands-on experience with Azure Automation, with hands-on skills in PowerShell.
- Extensive knowledge of Windows Server operating systems and Active Directory.
- Extensive knowledge of Linux operating systems (CentOS and Ubuntu).
- Installing, configuring, and managing AD/DHCP/DNS services.
- Creating group policies and implementing them as per standard procedures.
- Install, secure, maintain, troubleshoot, and upgrade Windows/Linux Server operating systems.
- Knowledge of network architecture and design, network security, website/portal security, and client/server security.
- Experience configuring single sign-on with Azure Active Directory.
- Server performance monitoring and applying solutions to improve performance where needed.
- Cost management and cost analysis.
- Backup and disaster recovery administration.
- SSL/App Service certificate configuration for Linux- and Windows-based websites/portals.
- Web-based portal development for end to end process management
- Work with the real-time feedback from users to make the product better
- Discover, design, develop, deploy, debug. Repeat! In a highly agile environment.
Position Vacant: 2
Job Location: Ahmedabad
Experience: 1 to 3 years
Qualification: Graduate (preferably BE/BTech/ME/MTech/MCA/BCA/MSc)
Requirement :
- Candidates who have worked on technologies such as Angular, Node.js, MongoDB, and React.js can apply.
Responsibilities and Duties :
- Proficiency in server-side programming with Node.js and working with NoSQL databases.
- Experience in building and consuming REST APIs.
- Worked as a full-stack developer with Node.js, React or AngularJS, and NoSQL DBs.
- Enthusiasm for writing scalable code.
- Good knowledge of JavaScript, JSON, and Git.
- Sound knowledge of data structures, algorithms, and system design.
- Unit testing experience.
- Knowledge of caching levels and in-memory DBs.
- Develop functional and sustainable web or mobile applications with clean code.
- Troubleshoot and debug applications.
About Graphene
Graphene is a Singapore-headquartered AI company that has been recognized as Singapore’s Best Start-Up by Switzerland’s Seedstarsworld, and has also been awarded best AI platform for healthcare at Vivatech Paris. Graphene India is also a member of the exclusive NASSCOM Deeptech club. We are developing an AI platform which is disrupting and replacing traditional market research with unbiased insights, with a focus on healthcare, consumer goods, and financial services.
Graphene was founded by Corporate leaders from Microsoft and P&G, and works closely with the Singapore Government & Universities in creating cutting edge technology which is gaining traction with many Fortune 500 companies in India, Asia and USA.
Graphene’s culture is grounded in delivering customer delight by recruiting high potential talent and providing an intense learning and collaborative atmosphere, with many ex-employees now hired by large companies across the world.
Graphene has a 6-year track record of delivering financially sustainable growth and is one of the rare start-ups that is self-funded yet profitable and debt-free. We have already created a strong bench of Singaporean leaders and are recruiting and grooming more talent with a focus on our US expansion.
Job Title: Data Analyst
Job Description
The Data Analyst is responsible for data storage, enrichment, transformation, and gathering based on data requests, as well as testing and maintaining data pipelines.
Responsibilities and Duties
- Managing end to end data pipeline from data source to visualization layer
- Ensure data integrity; Ability to pre-empt data errors
- Organized management and storage of data
- Provide quality assurance of data, working with quality assurance analysts if necessary.
- Commissioning and decommissioning of data sets.
- Processing confidential data and information according to guidelines.
- Helping develop reports and analysis.
- Troubleshooting the reporting database environment and reports.
- Managing and designing the reporting environment, including data sources, security, and metadata.
- Supporting the data warehouse in identifying and revising reporting requirements.
- Supporting initiatives for data integrity and normalization.
- Evaluating changes and updates to source production systems.
- Training end-users on new reports and dashboards.
- Initiate data gathering based on data requirements
- Analyse the raw data to check if the requirement is satisfied
Qualifications and Skills
- Technologies required: Python, SQL/NoSQL databases (CosmosDB)
- Experience required: 2 to 5 years, including experience in data analysis using Python
- Understanding of the software development life cycle
- Plan, coordinate, develop, test, and support data pipelines; document and support reporting dashboards (PowerBI)
- Automate the steps needed to transform and enrich data.
- Communicate issues, risks, and concerns proactively to management. Document the process thoroughly to allow peers to assist with support as needed.
- Excellent verbal and written communication skills
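The data-integrity responsibilities above can be sketched with a small pre-ingestion check using only the Python standard library. The column names are hypothetical; the idea is simply to pre-empt data errors by separating clean rows from rows with missing required fields before they enter a pipeline.

```python
import csv
import io

def validate_rows(csv_text, required=("id", "date", "value")):
    """Split CSV rows into clean rows and per-line error messages.

    Hypothetical schema: the required column names are illustrative only.
    """
    clean, errors = [], []
    # start=2 because line 1 of the file is the header row.
    for lineno, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=2):
        missing = [col for col in required if not (row.get(col) or "").strip()]
        if missing:
            errors.append(f"line {lineno}: missing {', '.join(missing)}")
        else:
            clean.append(row)
    return clean, errors

raw = "id,date,value\n1,2024-01-01,10\n2,,7\n3,2024-01-03,\n"
rows, errs = validate_rows(raw)
print(len(rows), len(errs))  # → 1 2
```

Only the clean rows would proceed to the transformation step; the error list feeds the proactive issue reporting the role describes.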
- Strong proficiency in JavaScript, including DOM manipulation and the JavaScript object model
- Past experience with Node.js, Redux, and other advanced JavaScript libraries and frameworks.
- Hands-on experience with Node.js, JavaScript, ECMAScript (OOJS), and JSX.
- Knowledge of SPAs and build tools like npm, Webpack, Grunt, Bower, etc.
- Good knowledge of mobile-friendly web application development
- Proficient understanding of web markup, including HTML5, CSS3, Bootstrap, and Flexbox.
- Good understanding of asynchronous request handling, partial page updates, and AJAX.
- Familiarity with RESTful APIs
- Hands-on experience with server-side CSS pre-processors such as SASS or LESS
- Proficient understanding of cross-browser compatibility issues and ways to work around them.
- Proficient understanding of code versioning tools such as Git and GitHub
Responsibilities:
- Perform all aspects of software development, such as requirements and specifications, design and development, and coding and debugging.
- Should be able to come up with strategies to speed up the iterative process of software development.
- Analyse and enhance the efficiency, stability, and scalability of system resources.
