50+ Python Jobs in India
Position: QA Automation Engineer
Experience: 0-3 Years
Location: Faridabad
Role Overview
We are seeking a talented QA Automation Engineer with 0-3 years of experience to join our dynamic team. The ideal candidate will possess strong technical skills in automation tools and frameworks such as Selenium, Cypress, or Playwright, and have a solid understanding of JavaScript, Python, and API testing. You will play a key role in ensuring the quality and reliability of our products.
Key Responsibilities
- Design, develop, and maintain automated test scripts using tools like Selenium, Cypress, Playwright, and Appium.
- Perform functional, regression, and performance testing across web and mobile applications.
- Collaborate with cross-functional teams to understand requirements and define test strategies.
- Execute and manage test cases, ensuring optimal coverage for critical features.
- Conduct API testing to validate endpoints, data flows, and integrations.
- Identify, document, and track bugs using tools such as JIRA or similar.
- Implement best practices in test automation and CI/CD processes.
- Provide regular updates on testing progress and ensure timely delivery of quality releases.
Required Skills and Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 0-3 years of experience in QA automation testing.
- Strong knowledge of automation tools like Selenium/Cypress/Playwright.
- Proficiency in JavaScript and/or Python.
- Experience with TestNG or similar frameworks.
- Good understanding of API testing (Postman, REST, SOAP).
- Familiarity with CI/CD pipelines and version control systems (e.g., Git).
- Excellent problem-solving skills and attention to detail.
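To make the API-testing requirement concrete, here is a minimal sketch of the kind of response validation such a role involves, using only the Python standard library. The endpoint payload shape (`id`, `email`, `active`) is a hypothetical example, not a real API.

```python
# A minimal API-test sketch using only the standard library.
# The payload shape here is a hypothetical example.
import json

def validate_user_payload(raw: str) -> list[str]:
    """Return a list of validation errors for a /users API response."""
    errors = []
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return ["response is not valid JSON"]
    for field, expected_type in [("id", int), ("email", str), ("active", bool)]:
        if field not in data:
            errors.append(f"missing field: {field}")
        elif not isinstance(data[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

# Simulated responses; a real run would fetch them over HTTP.
good = '{"id": 7, "email": "a@b.com", "active": true}'
bad = '{"id": "7", "email": "a@b.com"}'
print(validate_user_payload(good))  # → []
print(validate_user_payload(bad))
```

In practice a framework such as pytest would drive many such checks against live endpoints, but the core of an API test is exactly this: assert on structure and types, not just status codes.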
Who are we?
Eliminate Fraud. Establish Trust.
IDfy is an Integrated Identity Platform offering products and solutions for KYC, KYB, Background Verifications, Risk Mitigation, Digital Onboarding, and Digital Privacy. We establish trust while delivering a frictionless experience for you, your employees, customers, and partners.
Only IDfy combines enterprise-grade technology with business understanding and has the widest breadth of offerings in the industry. With more than 12 years of experience and 2 million verifications a day, we are pioneers in this space.
Our clients include HDFC Bank, IndusInd Bank, Zomato, Amazon, PhonePe, Paytm, HUL, and many others. We have successfully raised $27M from Elev8 Venture Partners, KB Investment, and Tenacity Ventures!
About Team
The machine learning team at IDfy is self-contained and responsible for building models and services that support key workflows. Our models serve as gating criteria for these workflows and are expected to perform accurately and quickly. We use a mix of conventional and hand-crafted deep learning models.
The team comes from diverse backgrounds and experiences and works directly with business and product teams to craft solutions for our customers. We function as a platform, not a services company.
We are the match if you...
- Are a mid-career machine learning engineer (or data scientist) with 4-8 years of experience in data science.
Must-Haves
- Experience in framing and solving problems with machine learning or deep learning models.
- Expertise in either computer vision or natural language processing (NLP), with appropriate production system experience.
- Experience with generative AI, including fine-tuning models and utilizing Retrieval-Augmented Generation (RAG).
- Understanding that modeling is just a part of building and delivering AI solutions, with knowledge of what it takes to keep a high-performance system up and running.
- Proficiency in Python and experience with frameworks like PyTorch, TensorFlow, JAX, HuggingFace, Spacy, etc.
- Enthusiasm and drive to learn and apply state-of-the-art research.
- Ability to write APIs.
- Experience with AI systems and at least one cloud provider: AWS Sagemaker, GCP VertexAI, AzureML.
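Since the must-haves call out Retrieval-Augmented Generation, here is a toy sketch of the retrieval half of RAG: rank documents by cosine similarity to a query and build a grounded prompt. Real systems would use learned embeddings and a vector store; the bag-of-words vectors and sample documents below are simplified stand-ins.

```python
# Toy RAG retrieval: pick the most relevant document by cosine
# similarity over bag-of-words vectors, then build a grounded prompt.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "PAN card verification requires name and date of birth",
    "Bank statement analysis flags suspicious transactions",
]
query = "how do I verify a PAN card"

vecs = [Counter(d.lower().split()) for d in docs]
qvec = Counter(query.lower().split())
best = max(range(len(docs)), key=lambda i: cosine(qvec, vecs[i]))

# The retrieved context is prepended to the question before it
# reaches the language model.
prompt = f"Context: {docs[best]}\n\nQuestion: {query}"
print(prompt)
```

The production version swaps the bag-of-words step for model embeddings (e.g. from a sentence-transformer) and the linear scan for an approximate nearest-neighbor index, but the retrieve-then-prompt shape is the same.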
Good to Haves
- Familiarity with languages like Go, Elixir, or an interest in functional programming.
- Knowledge and experience in MLOps and tooling, particularly Docker and Kubernetes.
- Experience with other platforms, frameworks, and tools.
Here’s what your day would look like...
In this role, you will:
- Work on all aspects of a production machine learning system, including acquiring data, training and building models, deploying models, building API services for exposing these models, and maintaining them in production.
- Focus on performance tuning of models.
- Occasionally support and debug production systems.
- Research and apply the latest technology to build new products and enhance the existing platform.
- Build workflows for training and production systems.
- Contribute to documentation.
While the emphasis will be on researching, building, and deploying models into production, you will also contribute to other aspects of the project.
Why Join Us?
- Innovative, Impactful Projects: Work on cutting-edge AI projects that push the boundaries of technology and positively impact billions of people.
- Collaborative Environment: Join a passionate and talented team dedicated to innovation and excellence. Be part of a diverse and inclusive workplace where your ideas and contributions are valued.
- Mentorship Opportunities: Mentor interns and junior team members, with support and coaching to help you develop leadership skills.
Excited already?
At IDfy, you will work on the entire end-to-end solution rather than just a small part of a larger process. Our culture thrives on collaboration, innovation, and impact.
Job Description :
- Proficient in SQL and data analysis tools and techniques (SQL, Python, etc.)
- Banking domain expertise
- Strong understanding of AML (Anti-Money Laundering) and fraud regulations
- Excellent data analysis and problem-solving skills
- Strong attention to data detail and accuracy
- Identifies, creates, and analyzes data, information, and reports to make recommendations and enhance organizational capability
- Experience in using business analysis tools and techniques
- Knowledge and understanding of various business analysis methodologies
- Knowledge of data extraction, transformation, and mapping
- Functional knowledge of AML/fraud
Notice Period: Immediate joiner to 30 days max
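As a small illustration of the AML-flavored data analysis this role implies, the sketch below uses SQLite to flag accounts with repeated transactions just under a reporting threshold (a classic "structuring" pattern). The schema, data, and 10,000 threshold are hypothetical examples.

```python
# Flag accounts with 3+ transactions just below a hypothetical
# 10,000 reporting threshold (a simple structuring heuristic).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (account TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO txns VALUES (?, ?)",
    [("A1", 9500), ("A1", 9800), ("A1", 9700), ("A2", 1200), ("A2", 50000)],
)

rows = conn.execute("""
    SELECT account, COUNT(*) AS n
    FROM txns
    WHERE amount BETWEEN 9000 AND 9999
    GROUP BY account
    HAVING n >= 3
""").fetchall()
print(rows)  # → [('A1', 3)]
```

Real AML work layers many such rules (velocity, counterparty risk, geography) over far larger datasets, but each rule typically reduces to an aggregation query of this shape.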
Job Title : AI/ML Engineer
Experience : 3+ Years
Location : Gurgaon (On-site)
Work Mode : 6 Days Work From Office
Summary :
We are seeking an experienced AI/ML Engineer to join our team in Gurgaon. The ideal candidate will have a strong background in machine learning algorithms, AI model development, and deploying ML solutions into production.
Key Responsibilities:
- Design, develop, and deploy AI/ML models and solutions.
- Work on data preprocessing, feature engineering, and model optimization.
- Collaborate with cross-functional teams to integrate AI/ML capabilities into applications.
- Monitor and improve the performance of deployed models.
- Stay updated on the latest AI/ML advancements and tools.
Requirements:
- Proven experience in AI/ML development with tools like TensorFlow, PyTorch, or Scikit-learn.
- Strong programming skills in Python.
- Familiarity with data preprocessing and feature engineering techniques.
- Experience with ML model deployment (e.g., using Docker, Kubernetes).
- Excellent problem-solving and analytical skills.
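To ground the data-preprocessing and feature-engineering requirement, here is a from-scratch sketch of two common steps: z-score standardization for numeric features and one-hot encoding for categoricals. A library pipeline (e.g. scikit-learn) would normally handle this; the sample values are placeholders.

```python
# Two common feature-engineering steps, implemented from scratch
# for illustration: z-score standardization and one-hot encoding.
from statistics import mean, stdev

def standardize(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def one_hot(values):
    categories = sorted(set(values))
    return [[1 if v == c else 0 for c in categories] for v in values]

ages = [20, 30, 40]
cities = ["delhi", "mumbai", "delhi"]

print(standardize(ages))  # centered on 0, unit variance
print(one_hot(cities))    # → [[1, 0], [0, 1], [1, 0]]
```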
Why Join Us?
- Competitive salary and growth opportunities.
- Work on cutting-edge AI/ML projects in a collaborative environment.
Client is based in Bangalore.
Ab Initio Developer
About the Role:
We are seeking a skilled Ab Initio Developer to join our dynamic team and contribute to the development and maintenance of critical data integration solutions. As an Ab Initio Developer, you will be responsible for designing, developing, and implementing robust and efficient data pipelines using Ab Initio's powerful ETL capabilities.
Key Responsibilities:
· Design, develop, and implement complex data integration solutions using Ab Initio's graphical interface and command-line tools.
· Analyze complex data requirements and translate them into effective Ab Initio designs.
· Develop and maintain efficient data pipelines, including data extraction, transformation, and loading processes.
· Troubleshoot and resolve technical issues related to Ab Initio jobs and data flows.
· Optimize performance and scalability of Ab Initio jobs.
· Collaborate with business analysts, data analysts, and other team members to understand data requirements and deliver solutions that meet business needs.
· Stay up-to-date with the latest Ab Initio technologies and industry best practices.
Required Skills and Experience:
· 2.5 to 8 years of hands-on experience in Ab Initio development.
· Strong understanding of Ab Initio components, including Designer, Conductor, and Monitor.
· Proficiency in Ab Initio's graphical interface and command-line tools.
· Experience in data modeling, data warehousing, and ETL concepts.
· Strong SQL skills and experience with relational databases.
· Excellent problem-solving and analytical skills.
· Ability to work independently and as part of a team.
· Strong communication and documentation skills.
Preferred Skills:
· Experience with cloud-based data integration platforms.
· Knowledge of data quality and data governance concepts.
· Experience with scripting languages (e.g., Python, Shell scripting).
· Certification in Ab Initio or related technologies.
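Ab Initio graphs are proprietary, so as a stand-in the sketch below shows the same extract-transform-load shape in plain Python, per the scripting-language preference above: read delimited records, clean and cast them, and load them into a SQLite target table. The data and schema are hypothetical.

```python
# A plain-Python ETL sketch mirroring the extract/transform/load
# stages an Ab Initio graph would express graphically.
import csv
import io
import sqlite3

raw = "id,amount\n1,100.5\n2,200.0\n3,-50.0\n"

# Extract: parse delimited source records.
records = list(csv.DictReader(io.StringIO(raw)))

# Transform: drop negative amounts, cast types.
clean = [(int(r["id"]), float(r["amount"]))
         for r in records if float(r["amount"]) >= 0]

# Load: bulk-insert into the target table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO target VALUES (?, ?)", clean)

result = conn.execute("SELECT COUNT(*), SUM(amount) FROM target").fetchone()
print(result)  # → (2, 300.5)
```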
Role: SOC Analyst
Job Type: Full Time, Permanent
Location: Onsite – Delhi
Experience Required: 1-3 Yrs
Skills Required:
1) Working knowledge across various security appliances (e.g., Firewall, WAF, Web Security Appliance, Email Security Appliance, Antivirus).
2) Experience with SOC Operations tools like SIEM, NDR, EDR, UEBA, SOAR, etc.
3) Strong analytical and problem-solving skills, with a deep understanding of cybersecurity principles, attack vectors, and threat intelligence.
4) Knowledge of network protocols, security technologies, and the ability to analyze and interpret security logs and events to identify potential threats.
5) Scripting skills (e.g., Python, Bash, PowerShell) for automation and analysis purposes.
6) Skilled in evaluating and integrating inputs from people, processes, and technologies to identify effective solutions.
7) Demonstrate a thorough understanding of the interdependencies between these elements and leverages this knowledge to develop comprehensive, efficient, and sustainable problem-solving strategies.
8) Excellent communication skills to articulate complex technical concepts to non-technical stakeholders and collaborate effectively with team members.
9) Ability to prioritize and manage multiple tasks in a dynamic environment.
10) Willingness to stay updated with the latest cybersecurity trends and technologies.
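As a small example of the scripting skill listed above, the sketch below parses auth-log lines with a regex and flags IPs with repeated failed logins, a simple brute-force heuristic. The log lines and threshold are synthetic samples.

```python
# Parse auth-log lines and flag IPs with repeated failed logins.
import re
from collections import Counter

logs = [
    "Jan 10 10:01:01 sshd: Failed password for root from 10.0.0.5",
    "Jan 10 10:01:03 sshd: Failed password for admin from 10.0.0.5",
    "Jan 10 10:01:05 sshd: Failed password for root from 10.0.0.5",
    "Jan 10 10:02:00 sshd: Accepted password for dev from 10.0.0.9",
]

failed = Counter()
for line in logs:
    m = re.search(r"Failed password for \S+ from (\S+)", line)
    if m:
        failed[m.group(1)] += 1

THRESHOLD = 3  # alert threshold; tune to your environment
alerts = [ip for ip, n in failed.items() if n >= THRESHOLD]
print(alerts)  # → ['10.0.0.5']
```

A SIEM's correlation rules do the same job at scale; scripts like this are typically used for ad-hoc triage or to prototype a rule before it is formalized.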
Job Responsibilities:
1) Continuously monitor and analyze security alerts and logs to identify potential incidents. Analyze network traffic patterns to detect anomalies and identify potential security breaches.
2) Implement correlation rules and create playbooks as per requirements. Continuously update and suggest new rules and playbooks based on the latest attack vectors and insights from public articles and cybersecurity reports.
3) Use security compliance and scanning solutions to conduct assessments and validate the effectiveness of security controls and policies. Suggest improvements to enhance the overall security posture.
4) Utilize deception security solutions to deceive and detect potential attackers within the network.
5) Leverage deep expertise in networking, system architecture, operating systems, virtual machines (VMs), servers, and applications to enhance cybersecurity operations.
6) Work effectively with cross-functional teams to implement and maintain robust security measures. Conduct thorough forensic analysis of security incidents to determine root causes and impact.
7) Assist with all phases of incident response. Develop and refine incident response strategies and procedures to address emerging cyber threats.
8) Perform digital forensics to understand attack vectors and impact. Swiftly respond to and mitigate security threats, ensuring the integrity and security of organizational systems and data.
9) Professionally communicate and report technical findings, security incidents, and mitigation recommendations to clients.
About Company
Innspark is the fastest-growing Deep-tech Solutions company that provides next-generation products and services in Cybersecurity and Telematics. The Cybersecurity segment provides out-of-the-box solutions to detect and respond to sophisticated cyber incidents, threats, and attacks. The solutions are powered by advanced Threat Intelligence, Machine Learning, and Artificial Intelligence that provide deep visibility into the enterprise's security.
We have developed and implemented solutions for a wide range of customers with highly complex environments, including government organizations, banks and financial institutions, PSUs, healthcare providers, and private enterprises.
Website: https://innspark.in/
We seek a skilled AI Engineer with a background in AI research, machine learning, and data science to join our innovative team! This is an exciting opportunity for an individual with extensive AI experience eager to work on cutting-edge projects, turning research into practical AI applications. You will collaborate with cross-functional teams to design, develop, and implement AI solutions that enhance business operations and decision-making processes.
What We’re Looking For
● Education: Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field. A PhD in AI or a related discipline is highly desirable.
● Experience:
○ Proven experience in AI research and implementation, with a deep understanding of both theoretical and practical aspects of AI.
○ Strong proficiency in machine learning (ML), data science, and deep learning techniques.
○ Hands-on experience with Python and ML libraries such as TensorFlow, PyTorch, Scikit-learn, etc.
○ Experience with data preprocessing, feature engineering, and data visualization.
○ Familiarity with cloud platforms such as AWS, Azure, or Google Cloud for AI and ML deployment.
○ Strong analytical and problem-solving skills.
○ Ability to translate AI research into practical applications and solutions.
○ Knowledge of AI model evaluation techniques and performance optimization.
○ Strong communication skills for presenting research and technical details to non-technical stakeholders.
○ Ability to work independently and in team environments.
Preferred Qualifications
● Experience working with natural language processing (NLP), computer vision, or reinforcement learning.
What You’ll Be Doing
● Conduct advanced AI research to develop innovative solutions for business challenges.
● Design, develop, and train machine learning models, including supervised, unsupervised, and deep learning algorithms.
● Collaborate with data scientists to analyze large datasets, identify patterns, and extract actionable insights.
● Work with software development teams to integrate AI models into production environments.
● Leverage state-of-the-art AI tools, frameworks, and libraries to accelerate AI development.
● Optimize AI models for accuracy, performance, and scalability in real-world applications.
● Collaborate closely with cross-functional teams, including product managers, software engineers, and data scientists, to implement AI-driven solutions.
● Document AI research outcomes, development processes, and performance metrics. Present findings to stakeholders in an easily understandable manner.
Responsibilities
· Design, develop, and deploy full-stack web applications using React.js and Python-Django.
· Build and optimize back-end systems, APIs, and database management solutions.
· Deploy and manage scalable and secure applications on AWS infrastructure.
· Perform code reviews, testing, and debugging to ensure high-quality, production-ready code.
· Collaborate with cross-functional teams to gather requirements and translate them into technical solutions.
· Take full ownership of assigned projects, ensuring timely and successful delivery.
· Implement and maintain CI/CD pipelines and DevOps practices.
Requirements
· Proven experience (2-4 years) with React.js and Python-Django in a professional setting.
· Strong expertise in full-stack development, including front-end, back-end, and database management.
· Proficiency with AWS services (e.g., EC2, S3, Fargate, Lambda) for deploying and managing applications.
· Experience with version control systems like Git.
· Solid understanding of best practices in software development, including Agile methodologies.
· Strong problem-solving skills, attention to detail, and ability to handle complex technical challenges.
· Excellent communication skills and a collaborative mindset.
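Django ultimately serves its REST APIs through WSGI, so as a framework-free sketch of the request-to-JSON-response shape this role works with, here is a minimal WSGI application using only the standard library. The route and payload are hypothetical.

```python
# A minimal WSGI app showing the request → JSON response shape
# that Django views implement under the hood.
import json

def app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    if path == "/api/health":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
    else:
        body = json.dumps({"error": "not found"}).encode()
        start_response("404 Not Found", [("Content-Type", "application/json")])
    return [body]

# Call the app directly, as a WSGI server (or test client) would.
captured = {}
def start_response(status, headers):
    captured["status"] = status

resp = b"".join(app({"PATH_INFO": "/api/health"}, start_response))
print(captured["status"], resp)
```

Django adds routing, ORM access, middleware, and serialization on top, but interviews for full-stack roles often probe whether candidates understand this underlying contract.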
Location: Hyderabad, Hybrid
We seek an experienced Lead Data Scientist to support our growing team of talented engineers in the Insurtech space. As a proven leader, you will guide, mentor, and drive the team to build, deploy, and scale machine learning models that deliver impactful business solutions. This role is crucial for ensuring the team provides real-world, scalable results. In this position, you will play a key role, not only by developing models but also by leading cross-functional teams, structuring and understanding data, and providing actionable insights to the business.
What We’re Looking For
● Proven experience as a Lead Machine Learning Engineer, Data Scientist, or a similar role, with extensive leadership experience managing and mentoring engineering teams.
● Bachelor’s or Master’s degree in Computer Science, Data Science, Mathematics, or a related field.
● Demonstrated success in leading teams that have designed, deployed, and scaled machine learning models in production environments, delivering tangible business results.
● Expertise in machine learning algorithms, deep learning, and data mining techniques, ideally in an enterprise setting.
● Hands-on experience with natural language processing (NLP) and data extraction tools.
● Proficiency in Python (or other programming languages), and with libraries such as Scikit-learn, TensorFlow, PyTorch, etc.
● Strong experience with MLOps processes, including model development, deployment, and monitoring at scale.
● Strong leadership and analytical skills, capable of structuring and interpreting data to provide meaningful business insights and guide decisions.
● A practical understanding of aligning machine learning initiatives with business objectives.
● Industry experience in insurance is a plus but not required.
What You’ll Be Doing
● Lead and mentor a team of strong Machine Learning Engineers, synthesizing their skills and ideas into scalable, impactful solutions.
● Have a proven track record in successfully leading teams in the development, deployment, and scaling of machine learning models to solve real-world business problems.
● Provide leadership in structuring, analyzing, and interpreting data, delivering insights to the business side to support informed decision-making.
● Oversee the full lifecycle of machine learning models – from development and training to deployment and monitoring in production environments.
● Collaborate closely with cross-functional teams to align AI efforts with business goals
and ensure measurable impacts.
● Leverage deep expertise in machine learning, data science, or related fields to enhance document interpretation tasks such as text extraction, classification, and semantic analysis.
● Utilize natural language processing (NLP) to improve our AI-based Intelligent Document Processing platform.
● Implement MLOps best practices, ensuring smooth deployment, monitoring, and maintenance of models in production.
● Continuously explore and experiment with new technologies and techniques to push the boundaries of AI solutions within the organization.
● Lead by example, fostering a culture of innovation and collaboration within the team while effectively communicating technical insights to both technical and non-technical stakeholders.
Candidates not located in Hyderabad will not be considered.
About koolio.ai
Website: www.koolio.ai
Koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.
About the Internship Position
We are looking for a motivated Backend Development Intern to join our innovative team. As an intern at koolio.ai, you’ll have the opportunity to work on a next-gen AI-powered platform and gain hands-on experience developing and optimizing backend systems that power our platform. This internship is ideal for students or recent graduates who are passionate about backend technologies and eager to learn in a dynamic, fast-paced startup environment.
Key Responsibilities:
- Assist in the development and maintenance of backend systems and APIs.
- Write reusable, testable, and efficient code to support scalable web applications.
- Work with cloud services and server-side technologies to manage data and optimize performance.
- Troubleshoot and debug existing backend systems, ensuring reliability and performance.
- Collaborate with cross-functional teams to integrate frontend features with backend logic.
Requirements and Skills:
- Education: Currently pursuing or recently completed a degree in Computer Science, Engineering, or a related field.
- Technical Skills:
- Good understanding of server-side technologies like Python
- Familiarity with REST APIs and database systems (e.g., MySQL, PostgreSQL, or NoSQL databases).
- Exposure to cloud platforms like AWS, Google Cloud, or Azure is a plus.
- Knowledge of version control systems such as Git.
- Soft Skills:
- Eagerness to learn and adapt in a fast-paced environment.
- Strong problem-solving and critical-thinking skills.
- Effective communication and teamwork capabilities.
- Other Skills: Familiarity with CI/CD pipelines and basic knowledge of containerization (e.g., Docker) is a bonus.
Why Join Us?
- Gain real-world experience working on a cutting-edge platform.
- Work alongside a talented and passionate team committed to innovation.
- Receive mentorship and guidance from industry experts.
- Opportunity to transition to a full-time role based on performance and company needs.
This internship is an excellent opportunity to kickstart your career in backend development, build critical skills, and contribute to a product that has a real-world impact.
Job Title: Senior Fullstack Engineer - Fintech
Location: Mumbai
Company: TIFIN Give
About Us:
TIFIN Give is an early growth-stage fintech company offering a modern, tech-driven Donor Advised Fund to empower charitable giving. We have a lean, hungry team building a feature-rich product and automated operations platform, through which we are servicing multiple enterprise clients. Our roadmap and deal pipeline are full to bursting, and we're looking for someone to join our team at this exciting stage. As we continue to scale, we're looking for a Senior Fullstack Engineer to lead our engineering teams and drive the successful delivery of our ambitious product roadmap.
OUR VALUES: Go with your GUT
- Grow at the Edge: We strive for personal growth by stepping out of our comfort zones and finding our genius zones with self-awareness and integrity. No excuses.
- Understanding through Listening and Speaking the Truth: We value transparency, communicating with radical candor, authenticity, and precision to create shared understanding. We challenge, but once a decision is made, we commit fully.
- I Win for Teamwin: We work within our genius zones to succeed, taking full ownership of our work. We inspire each other with our energy and attitude, flying in formation to win together.
Role Overview:
Experienced Full Stack Developer with 8 years of hands-on expertise in designing, developing, and deploying web applications.
- Proficient in backend development using Python, with strong skills in frameworks such as Django and FastAPI.
- Skilled in frontend development using ReactJS, ensuring seamless user interfaces and responsive design.
- Skilled at SQL database management, with a focus on performance optimization and data integrity.
- Capable of contributing to system design and architecture discussions, ensuring scalability, reliability, and security.
- Strong communicator with a track record of collaborating effectively within cross-functional teams and delivering high-quality solutions.
- Open to transitioning into Data Engineering roles leveraging strong SQL skills and understanding of data pipelines.
- Good debugging skills.
Key Skills:
- Backend Development: Extensive experience in Python-based backend development with Django and FastAPI frameworks. Proficient in building RESTful APIs, handling authentication, authorization, and data validation.
- Frontend Development: Skilled in frontend technologies, particularly ReactJS, for creating dynamic and responsive user interfaces. Familiar with state management libraries like Redux and context API.
- Database Management: Strong command over SQL for designing schemas, writing complex queries, and optimizing database performance. Experience with ORM frameworks like Django ORM.
- System Design: Understanding of system design principles, including scalability, performance optimization, and microservices architecture. Ability to contribute to architectural decisions and technical design discussions.
- Data Engineering: Open to roles in Data Engineering, with skills in SQL, data pipelines, ETL processes, and data warehousing concepts.
- Communication: Effective communicator with experience in team collaboration, client interactions, and articulating technical concepts to non-technical stakeholders. Proven ability to work in Agile development environments.
Note: This is purely an individual-contributor (IC) role; hands-on coding is mandatory.
Additional Skills (Good to Have):
- Experience with Next.js for server-side rendering and building React applications.
- Familiarity with Snowflake / Redshift for cloud-based data warehousing and analytics.
- Knowledge of data manipulation and analysis tools such as Pandas and NumPy.
- Exposure to Salesforce platform APIs and extensions or similar CRM functionalities.
Technical Proficiency:
- Backend: Python, Django, FastAPI, Flask, RESTful APIs, GraphQL
- Frontend: JavaScript, ReactJS, Redux, HTML5, CSS3, Responsive Design
- Databases: PostgreSQL, MySQL, SQLite, MongoDB, ORM (Django ORM)
- Data Engineering: SQL, ETL Processes, Data Warehousing Concepts
- Tools & DevOps: Git, Docker, AWS (EC2, S3, RDS), CI/CD pipelines, Agile methodologies
Professional Experience:
- Developed and maintained scalable web applications using Django and FastAPI, ensuring high performance and reliability.
- Designed and implemented frontend components and user interfaces using ReactJS, enhancing user experience and interactivity.
- Optimized SQL queries and database schema design to improve application performance and data integrity.
- Collaborated with cross-functional teams in Agile environments to deliver features and meet project milestones.
- Open to transitioning into Data Engineering roles, leveraging SQL skills and understanding of data pipelines to contribute to data-driven solutions.
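The SQL optimization work described above often comes down to reading query plans. The sketch below uses SQLite to compare the plan for the same query before and after adding an index; the `claims` schema and data are hypothetical.

```python
# Compare SQLite's query plan for the same query before and
# after adding an index on the filtered column.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (id INTEGER, policy_no TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?, ?)",
    [(i, f"P{i % 100}", i * 1.5) for i in range(1000)],
)

def plan(sql):
    """Return SQLite's EXPLAIN QUERY PLAN output as one string."""
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM claims WHERE policy_no = 'P7'"
before = plan(query)
print(before)  # full table scan

conn.execute("CREATE INDEX idx_policy ON claims(policy_no)")
after = plan(query)
print(after)  # now searches using idx_policy
```

PostgreSQL and MySQL expose the same idea through `EXPLAIN`/`EXPLAIN ANALYZE`; the habit of checking the plan before and after a schema change carries over directly.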
Compensation and Benefits Package:
- Competitive compensation with a discretionary annual bonus.
- Performance-linked variable compensation.
- Medical insurance.
Note on Location:
While we have team centers in Boulder, New York City, San Francisco, and Charlotte, this role is based out of Mumbai.
TIFIN is an equal-opportunity workplace, and we value diversity in our workforce. All qualified applicants will receive consideration for employment without regard to any discrimination.
First and foremost, this role is not for you if you don't enjoy solving really deeeep-tech problems with a high surface area. No single person could solve the complete problem on their own (we know that), and there will be a lot to learn along the way. Read on only if something of this sort interests you.
We are building a Social Network for Fashion & E-commerce powered by AI, with the vision of redefining how people buy things on the internet! We believe we are one of the very few companies here to solve a real consumer problem, not just cash in on the AI hype. We've raised $1M in pre-seed funding from top VCs and are supported by some of the best entrepreneurs in India.
As a founding software engineer, your major role will be to take full ownership of developing and scaling the backend, collaborating with AI/ML engineers to develop scalable pipelines. You will be deeply involved in software development, testing, integration, partner collaboration, and technical design and innovation aligned with product strategy.
Now, What we’re looking for & our expectations
Note: We don’t really care about your qualifications as long as we can see that you have sufficient knowledge required for the role & a strong sense of ownership in whatever you do.
- 3-5+ years of experience with Python and concurrency; experience with Golang is a plus
- Take ownership of the complete backend and development cycles; strong frontend knowledge is a plus
- Be a core part of code and design reviews, keeping the bar high on coding and design standards and on engineering fundamentals
- Make high-level decisions on system design across all of engineering
- Represent engineering in the Shiprooms and make technical tradeoffs
- Hands-on development of critical platform features in partnership with ICs
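Since Python concurrency is the headline skill here, a minimal sketch: fan out independent, I/O-bound backend calls with `asyncio.gather` instead of running them sequentially. The "fetch" below is simulated with a sleep; service names are placeholders.

```python
# Fan out three simulated I/O-bound calls concurrently; total wall
# time is ~0.2s rather than the 0.6s a sequential version would take.
import asyncio
import time

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stand-in for an I/O-bound call
    return f"{name}: done"

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(
        fetch("catalog", 0.2),
        fetch("profile", 0.2),
        fetch("feed", 0.2),
    )
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(results, f"in ~{elapsed:.1f}s")
```

The same pattern applies to real HTTP or database calls with an async client; CPU-bound work, by contrast, needs processes or threads rather than an event loop.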
A few things about us
- Building the next 100-person, $100 billion AI-first company
- Speed of execution is really important to us
- Delivering exceptional experiences that exceed user expectations
- We embrace a Culture of continuous learning and innovation
- Users are at the heart of everything we do
- We believe in open communication, authenticity, and transparency
- Solving problems through First Principles
Benefits of joining us
- Top of the market Compensation
- Any hardware you need to do your best work
- Open to hybrid work (although we prefer in-person over remote to maximise learning)
- Flexible work timings
- Learning & Development budget
- Flexible allowances with hassle-free reimbursements
- Quarterly team outings covered by us
- First-hand experience of shipping world-class products
- Having loads of fun with a GenZ team at a Hacker House
First and foremost, this role is not for you if you don't enjoy solving really deeeep-tech problems with a high surface area. No single person could solve the complete problem on their own (we know that), and there will be a lot to learn along the way. Read on only if something of this sort interests you.
We are building a Social Network for Fashion & E-commerce powered by AI, with the vision of redefining how people buy things on the internet! We believe we are one of the very few companies here to solve a real consumer problem, not just cash in on the AI hype. We've raised $1M in pre-seed funding from top VCs and are supported by some of the best entrepreneurs in India.
As a founding AI/ML Engineer, you will build and train foundation models from scratch, fine-tune existing models for each use case, scrape large amounts of data, and design pipelines for data ingestion, processing, and system scalability. You will also work on recommendation systems, particularly algorithm design, and plan their execution from day one.
Now, what we're looking for & our expectations:
Note: We don’t really care about your qualifications as long as we can see that you have sufficient knowledge required for the role & a strong sense of ownership in whatever you do.
- Design and deploy advanced machine learning models in Computer Vision including object detection & similarity matching
- Implement scalable data pipelines, optimize models for performance and accuracy, and ensure they are production-ready with MLOps
- Take part in code reviews, share knowledge, and lead by example to maintain high-quality engineering practices
- In terms of technical skills, you should have a high proficiency in Python, machine learning frameworks like TensorFlow & PyTorch
- Experience with cloud platforms and knowledge of deploying models in production environments
- Have a decent understanding of Reinforcement Learning and some understanding of Agentic AI & LLM-Ops
- First-hand experience with scalable Backend Engineering will give you priority consideration over other candidates
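As a rough sketch of the similarity-matching work mentioned above: catalog items are typically embedded as feature vectors and compared by cosine similarity. The vectors and SKU names below are hypothetical stand-ins for real model embeddings.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors: 1.0 = identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query, catalog):
    # Return the catalog item whose embedding is closest to the query embedding.
    return max(catalog, key=lambda item: cosine_similarity(query, item["vec"]))

# Hypothetical 4-dim embeddings standing in for real model outputs.
catalog = [
    {"sku": "red-dress",  "vec": [0.9, 0.1, 0.0, 0.2]},
    {"sku": "blue-jeans", "vec": [0.1, 0.8, 0.3, 0.0]},
]
query = [0.85, 0.15, 0.05, 0.25]
print(most_similar(query, catalog)["sku"])  # closest catalog item to the query
```

In a real system the vectors would come from a trained vision model, and nearest-neighbour search would use an ANN index rather than a linear scan.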
Location: Hyderabad (Work From Office)
TechnoIdentity is on the lookout for talented and driven individuals to join our IT team as Dev Trainees. If you're passionate about programming, enthusiastic about learning, and ready to take on challenges to deliver exceptional solutions, this opportunity is for you.
We value team members who:
- Demonstrate curiosity and eagerness to adopt new technologies.
- Exhibit logical thinking and a keen problem-solving ability.
- Are proficient in at least one programming language and have hands-on experience with writing and debugging code.
- Communicate effectively, both verbally and in writing.
- Are passionate learners with a desire to create meaningful solutions that address real-world business problems.
- Have a solid understanding of user experience and design principles.
Responsibilities:
As a Trainee Software Engineer, you will:
- Work on building and maintaining both frontend and backend components of web applications.
- Collaborate with cross-functional teams to ensure seamless integration of application components.
- Write clean, efficient, and well-documented code, adhering to good coding practices.
- Develop unit tests and perform debugging to ensure quality and functionality.
- Stay up-to-date with emerging trends and technologies in software development.
Requirements:
- Proficiency in at least one programming language such as Python, Java, or JavaScript, plus working knowledge of HTML/CSS.
- Familiarity with frontend frameworks like ReactJS (or experience in TypeScript) is a significant advantage.
- Basic understanding of backend technologies (e.g., Node.js, Django, Flask) is a plus.
- Knowledge of the software development life cycle (SDLC) and web development workflows.
- Ability to analyze and break down real-world problems with an eye for creating practical solutions.
- Exposure to version control tools like Git is a bonus.
Preferred Skills:
- Hands-on experience with modern frontend libraries/frameworks (e.g., ReactJS, Angular).
- Understanding of REST APIs and how frontend interacts with backend systems.
- Familiarity with database systems (e.g., MySQL, MongoDB).
- Strong analytical skills coupled with a creative approach to problem-solving.
Why Join Us?
- Opportunity to work with cutting-edge technologies in a collaborative environment.
- Supportive culture that encourages continuous learning and growth.
- A chance to develop innovative solutions that make an impact.
If you’re a tech enthusiast ready to embark on a rewarding career, we’d love to hear from you! Apply today and take the first step toward becoming part of our dynamic team.
Job Title: AI/ML Engineer
Location: Pune
Experience Level: 1-5 Years
Job Type: Full-Time
About Us
Vijay Sales is one of India’s leading retail brands, offering a wide range of electronics and home appliances across multiple channels. As we continue to expand, we are building advanced technology solutions to optimise operations, improve customer experience, and drive growth. Join us in shaping the future of retail with innovative AI/ML-powered solutions.
Role Overview
We are looking for an AI/ML Engineer to join our technology team and help drive innovation across our business. In this role, you will design, develop, and implement machine learning models for applications like inventory forecasting, pricing automation, customer insights, and operational efficiencies. Collaborating with a cross-functional team, you’ll ensure our AI/ML solutions deliver measurable impact.
Key Responsibilities
- Develop and deploy machine learning models to address business challenges such as inventory forecasting, dynamic pricing, demand prediction, and customer segmentation.
- Preprocess and analyze large volumes of sales and customer data to uncover actionable insights.
- Design algorithms for supervised, unsupervised, and reinforcement learning tailored to retail use cases.
- Implement and manage pipelines to deploy and monitor models in production environments.
- Continuously optimize model performance through retraining, fine-tuning, and feedback loops.
- Work closely with business teams to identify requirements and translate them into AI/ML solutions.
- Stay current with the latest AI/ML advancements and leverage them to enhance Vijay Sales’ technology stack.
Qualifications
Required:
- Bachelor’s/Master’s degree in Computer Science, Data Science, Mathematics, or a related field.
- Proficiency in Python and ML frameworks such as TensorFlow, PyTorch, or Scikit-learn.
- Proven experience in developing, training, and deploying machine learning models.
- Strong understanding of data processing, feature engineering, and data pipeline design.
- Knowledge of algorithms for forecasting, classification, clustering, and optimization.
- Experience working with large-scale datasets and databases (SQL/NoSQL).
Preferred:
- Familiarity with retail industry challenges, such as inventory and pricing management.
- Experience with cloud platforms (AWS, GCP, or Azure) for deploying ML solutions.
- Knowledge of MLOps practices for scalable and efficient model management.
- Hands-on experience with time-series analysis and demand forecasting models.
- Understanding of customer analytics and personalization techniques.
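As a hedged illustration of the demand-forecasting item above, the simplest baseline is a moving average over recent periods; the sales figures below are made up, and a production system would use seasonal or ML models instead:

```python
def moving_average_forecast(history, window=3):
    """Forecast next period's demand as the mean of the last `window` periods.
    A deliberately simple baseline; real systems would use seasonal models
    (e.g. Holt-Winters) or ML regressors."""
    if len(history) < window:
        window = len(history)
    recent = history[-window:]
    return sum(recent) / len(recent)

# Hypothetical weekly unit sales for one SKU.
weekly_sales = [120, 135, 128, 140, 150, 145]
print(moving_average_forecast(weekly_sales))  # mean of the last 3 weeks: 145.0
```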
Why Join Vijay Sales?
- Work with one of India’s most iconic retail brands as we innovate and grow.
- Be part of a team building transformative AI/ML solutions for the retail industry.
- A collaborative work environment that encourages creativity and learning.
- Competitive salary and benefits package, along with exciting growth opportunities.
Ready to make an impact? Apply now and help shape the future of Vijay Sales with cutting-edge AI/ML technologies!
ABOUT THE TEAM:
The production engineering team is responsible for the key operational pillars (Reliability, Observability, Elasticity, Security, and Governance) of the cloud infrastructure at Swiggy. We strive to excel and continuously improve on these key operational pillars. We design, build, and operate Swiggy's cloud infrastructure and developer platforms to provide a seamless experience to our internal and external consumers.
What qualities are we looking for:
10+ years of professional experience in infrastructure, production engineering
Strong design, debugging, and problem-solving skills
Proficiency in at least one programming language like Python, GoLang or Java.
B Tech/M Tech in Computer Science or equivalent from a reputed college.
Hands-on experience with AWS and Kubernetes or similar cloud/infrastructure platforms
Hands-on with DevOps principles and practices (everything-as-code, CI/CD, test everything, proactive monitoring, etc.)
Deep understanding of OS/virtualization/Containerization, network protocols & concepts
Exposure to modern-day infrastructure technologies, expertise in building and operating distributed systems.
Hands-on coding on any of the languages like Python or GoLang.
Familiarity with software engineering practices including unit testing, code reviews, and design documentation.
Technically mentor and lead the team towards engineering and operational excellence
Act like an owner, strive for excellence.
What will you get to do here?
Be part of a culture where Customer Obsession, Ownership, Teamwork, Bias for Action, and Insistence on High Standards are a way of life
Come up with best practices that help the team achieve its technical tasks and continually improve the team's technology
Be a hands-on engineer; ensure the frameworks/infrastructure built are well designed, scalable, and of high quality
Build and/or operate platforms that are highly available, elastic, scalable, operable and observable
Experiment with new & relevant technologies and tools, and drive adoption while measuring yourself on the impact you can create.
Implementation of long-term technology vision for the team.
Build/Adapt and implement tools that empower the Swiggy engineering teams to self-manage the infrastructure and services owned by them.
You will identify, articulate, and lead various long-term tech vision, strategies, cross-cutting initiatives and architecture redesigns.
Design systems and make decisions that will keep pace with the rapid growth of Swiggy. Document your work and decision-making processes, and lead presentations and discussions in a way that is easy for others to understand.
Creating architectures & designs for new solutions around existing/new areas. Decide technology & tool choices for the team.
· Designing and building data pipelines
Guiding customers on how to architect and build data engineering pipelines on Snowflake, leveraging Snowpark, UDFs, etc., for IoT streaming analytics
· Developing code
Writing code in familiar programming languages, such as Python, and executing it within the Snowflake Data Cloud
· Data Science
Perform complex time series-based analyses for predictive and prescriptive maintenance, then apply deep learning methods (ANN, RNN, etc., trained via backpropagation) to uncover additional algorithms and embed them into the MLOps pipeline
· Analyzing and testing software
Performing complex analysis, design, development, testing, and debugging of computer software
· Creating documentation
Creating repeatable processes and documentation as a result of customer engagement
· Developing best practices
Developing best practices, including ensuring knowledge transfer so that customers are properly enabled
· Working with stakeholders
Working with appropriate stakeholders to define system scope and objectives and establish baselines
· Querying and processing data
Querying and processing data with a DataFrame object
· Converting lambdas and functions
Converting custom lambdas and functions to user-defined functions (UDFs) that you can call to process data
· Writing stored procedures
Writing a stored procedure that you can call to process data, or automate with a task to build a data pipeline
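The predictive-maintenance analyses described above ultimately reduce to flagging abnormal sensor readings. A minimal, Snowflake-agnostic sketch using a rolling z-score (the window, threshold, and sensor values are illustrative assumptions; the real pipeline would run inside Snowpark and the MLOps stack):

```python
import statistics

def rolling_anomalies(readings, window=5, threshold=3.0):
    # Flag indices whose reading deviates more than `threshold` standard
    # deviations from the mean of the preceding `window` readings.
    anomalies = []
    for i in range(window, len(readings)):
        prior = readings[i - window:i]
        mu = statistics.mean(prior)
        sigma = statistics.stdev(prior)
        if sigma and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical vibration-sensor values with one obvious spike at index 6.
series = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 9.5, 1.02]
print(rolling_anomalies(series))  # [6]
```

Inside Snowflake, the same per-row logic is what gets wrapped as a UDF or stored procedure and scheduled as a task.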
Mammoth
Mammoth is a data management platform revolutionizing the way people work with data. Our lightweight, self-serve SaaS analytics solution takes care of data ingestion, storage, cleansing, transformation, and exploration, empowering users to derive insights with minimal friction. Based in London, with offices in Portugal and Bangalore, we offer flexibility in work locations, whether you prefer remote or in-person office interactions.
Job Responsibilities
- Collaboratively build robust, scalable, and maintainable software across the stack.
- Design, develop, and maintain APIs and web services.
- Dive into complex challenges around performance, scalability, concurrency, and deliver quality results.
- Improve code quality through writing unit tests, automation, and conducting code reviews.
- Work closely with the design team to understand user requirements, formulate use cases, and translate them into effective technical solutions.
- Contribute ideas and participate in brainstorming, design, and architecture discussions.
- Engage with modern tools and frameworks to enhance development processes.
Skills Required
Frontend:
- Proficiency in Vue.js and a solid understanding of JavaScript.
- Strong grasp of HTML/CSS, with attention to detail.
- Knowledge of TypeScript is a plus.
Backend:
- Strong proficiency in Python and mastery of at least one framework (Django, Flask, Pyramid, FastAPI or litestar).
- Experience with database systems such as PostgreSQL.
- Familiarity with performance trade-offs and best practices for backend systems.
General:
- Solid understanding of fundamental computer science concepts like algorithms, data structures, databases, operating systems, and programming languages.
- Experience in designing and building RESTful APIs and web services.
- Ability to collaborate effectively with cross-functional teams.
- Passion for solving challenging technical problems with innovative solutions.
- AI/ML experience, or willingness to learn it on a need basis.
Nice to have:
- Familiarity with DevOps tools like Docker, Kubernetes, and Ansible.
- Understanding of frontend build processes using tools like Vite.
- Demonstrated experience with end-to-end development projects or personal projects.
Job Perks
- Free lunches and stocked kitchen with fruits and juices.
- Game breaks to unwind.
- Work in a spacious and serene office located in Koramangala, Bengaluru.
- Opportunity to contribute to a groundbreaking platform with a passionate and talented team.
If you’re an enthusiastic developer who enjoys working across the stack, thrives on solving complex problems, and is eager to contribute to a fast-growing, mission-driven company, apply now!
“I DESIGN MY LIFE” is an online business consulting/coaching company headed by renowned business coach Sumit Agarwal. We provide online consulting and training to business owners of SMEs and MSMEs across India.
You can find more about us here: https://idesignmylife.net/careers/
This is a hands-on position. The role will have the following aspects:
POSITION: Software Developer
LOCATION: Full time(permanent) work from home opportunity
LANGUAGES: JavaScript, MySQL, Python, ERPNext, HTML, CSS, and Bootstrap
ROLE: We are looking for people who
- Code well
- Have written complex software
- Are self-starters: able to read the docs without hand-holding
- Experience in Python/JavaScript/jQuery/Vue/MySQL will be a plus
- Functional knowledge of ERP will be a plus
Basic Qualifications
- BE / B.Tech - IT/ CS
- 1 / 2+ years of professional experience
- Strong C# and SQL skills
- Strong skills in React and TypeScript
- Familiarity with AWS services or experience working in other cloud computing environments.
- Experience with SQL Server and PostgreSQL.
- Experience with automated unit testing frameworks.
- Experience in designing and implementing REST APIs & micro services-based solutions.
Job Title: Data Analyst-Fintech
Job Description:
We are seeking a highly motivated and detail-oriented Data Analyst with 2 to 4 years of work experience to join our team. The ideal candidate will have a strong analytical mindset, excellent problem-solving skills, and a passion for transforming data into actionable insights. In this role, you will play a pivotal role in gathering, analyzing, and interpreting data to support informed decision-making and drive business growth.
Key Responsibilities:
1. Data Collection and Extraction:
§ Gather data from various sources, including databases, spreadsheets, and APIs.
§ Perform data cleansing and validation to ensure data accuracy and integrity.
2. Data Analysis:
§ Analyze large datasets to identify trends, patterns, and anomalies.
§ Conduct analysis and data modeling to generate insights and forecasts.
§ Create data visualizations and reports to present findings to stakeholders.
3. Data Interpretation and Insight Generation:
§ Translate data insights into actionable recommendations for business improvements.
§ Collaborate with cross-functional teams to understand data requirements and provide data-driven solutions.
4. Data Quality Assurance:
§ Implement data quality checks and validation processes to ensure data accuracy and consistency.
§ Identify and address data quality issues promptly.
Qualifications:
1. Bachelor's degree in a relevant field such as Computer Science, Statistics, Mathematics, or a related discipline.
2. Proven work experience as a Data Analyst, with 2 to 4 years of relevant experience.
3. Knowledge of data warehousing concepts and ETL processes is advantageous.
4. Proficiency in data analysis tools and languages (e.g., SQL, Python, R).
5. Experience with data visualization tools (e.g., Tableau, Power BI) is a plus.
6. Strong analytical and problem-solving skills.
7. Excellent communication and presentation skills.
8. Attention to detail and a commitment to data accuracy.
9. Familiarity with machine learning and predictive modeling is a bonus.
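The trend identification mentioned in the analysis responsibilities can start as simply as a least-squares slope over time. A stdlib-only sketch with hypothetical transaction counts:

```python
def trend_slope(values):
    # Least-squares slope of values over time (x = 0, 1, 2, ...):
    # positive slope = upward trend, negative = downward.
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Hypothetical monthly transaction counts.
monthly_txns = [1000, 1050, 1100, 1150, 1200]
print(trend_slope(monthly_txns))  # 50.0 units gained per month
```

In practice the same question is usually answered with SQL window functions or pandas, but the underlying arithmetic is this simple.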
If you are a data-driven professional with a passion for uncovering insights from complex datasets and have the qualifications and skills mentioned above, we encourage you to apply for this Data Analyst position. Join our dynamic team and contribute to making data-driven decisions that will shape our company's future.
Fatakpay is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
at Tech Prescient
Job Position: Senior Technical Lead
Desired Skills: Python, Flask/FastAPI, MySQL/PostgreSQL, NoSQL, AWS, JavaScript, Angular/React
Experience Range: 8 - 10 Years
Type: Full Time
Location: India (Pune)
Availability: Immediate to 30 Days
Job Description: Tech Prescient is looking for an experienced and proven Technical Lead (Python/Flask/FastAPI/React/AWS/Azure Cloud) who has worked across the modern full stack to deliver software products and solutions. He/She should have experience leading from the front: handling customer situations, managing internal teams, anchoring project communications, and delivering an outstanding experience to our customers.
In specific, below are some of the must-have skills and experiences to fulfill the job requirements -
- 8+ years of relevant software design and development experience building cloud native applications using Python and JavaScript stack.
- Thorough understanding of deploying to at least one of the Cloud platforms (AWS or Azure). Knowledge of Kubernetes is an added advantage.
- Experience with Microservices architecture and serverless deployments
- Well-versed with RESTful services and building scalable API architectures using any of the Python frameworks.
- Hands-on with Frontend technologies using either Angular or React
- Experience managing distributed delivery teams, tech leadership, ideating with the customer leadership, design discussions and code reviews to deliver quality software products
- Good attitude and passion to learn new technologies on the job.
- Good communication and leadership skills. Ability to lead the internal team as well as customer communication (email/calls)
- Bachelor's degree in Computer Science, Engineering, or a related field; advanced degree preferred.
• Leverage your expertise in Python to design and implement distributed systems for web scraping, ensuring robust data extraction from diverse web sources.
• Develop and optimize scalable, self-healing scraping frameworks, integrated with AI tools for intelligent automation of the data collection process.
• Implement monitoring, logging, and alerting mechanisms to ensure high availability and performance of distributed web scraping systems.
• Work with large-scale NoSQL databases (e.g., MongoDB) to store and query scraped data efficiently.
• Collaborate with cross-functional teams to research and implement innovative AI-driven solutions for data extraction and automation.
• Ensure data integrity and security while interacting with various web sources.
Required Skills:
• Extensive experience with Python and web frameworks like Flask, FastAPI, or Django.
• Experience with AI tools and machine learning libraries to enhance and automate scraping processes.
• Solid understanding of building and maintaining distributed systems, with hands-on experience in parallel programming (multithreading, asynchronous I/O, multiprocessing).
• Working knowledge of asynchronous queue systems like Redis, Celery, RabbitMQ, etc., to handle distributed scraping tasks.
• Proven experience with web mining and scraping tools (e.g., Scrapy, BeautifulSoup, Selenium), and handling dynamic content.
• Proficiency in working with NoSQL data storage systems like MongoDB, including querying and handling large datasets.
• Knowledge of various front-end technologies and how websites are built.
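The distributed-scraping skills above center on bounded concurrency. A minimal asyncio sketch of the pattern, with a stubbed `fetch` standing in for a real HTTP client (the URLs and stub are illustrative, not a production scraper):

```python
import asyncio

async def fetch(url):
    # Stand-in for a real HTTP client call (e.g. aiohttp); simulates I/O wait.
    await asyncio.sleep(0.01)
    return f"<html from {url}>"

async def scrape_all(urls, max_concurrency=10):
    # Bound concurrency with a semaphore so the scraper neither overwhelms
    # target sites nor exhausts local sockets.
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded_fetch(url):
        async with sem:
            return await fetch(url)

    return await asyncio.gather(*(bounded_fetch(u) for u in urls))

pages = asyncio.run(scrape_all([f"https://example.com/item/{i}" for i in range(5)]))
print(len(pages))
```

In a distributed setup the URL list would come from a queue (Redis/Celery/RabbitMQ) rather than a local list, but the per-worker concurrency pattern is the same.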
Role: Software Engineer (Integration)
Work Experience: Min. 2 years
About the Role
We are seeking a versatile Software Engineer (Integration) who thrives on solving complex technical challenges and connecting disparate systems with elegance and efficiency.
Responsibilities
● Design, develop, and maintain robust integration solutions between multiple systems
● Create scalable endpoints and implement efficient Cron jobs
● Develop and optimize integration scripts using Python and JavaScript
● Interface with legacy and modern systems, including SOAP and REST APIs
● Perform comprehensive system mapping and data transformation
● Architect solutions that bridge different technological ecosystems
● Collaborate across teams to ensure seamless system interoperability
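The system-mapping and data-transformation responsibilities above often boil down to a declarative field map between a legacy payload and a target schema. A small sketch; all field names are hypothetical:

```python
# Declarative mapping from a legacy (SOAP-ish) payload to a nested
# (REST-ish) target schema. Field names here are illustrative only.
FIELD_MAP = {
    "CustomerName": "customer.name",
    "CustomerEmail": "customer.email",
    "OrderTotal": "order.total",
}

def set_nested(target, dotted_key, value):
    # Create nested dicts along a dotted path, e.g. "customer.name".
    parts = dotted_key.split(".")
    for part in parts[:-1]:
        target = target.setdefault(part, {})
    target[parts[-1]] = value

def transform(source):
    out = {}
    for src_key, dst_key in FIELD_MAP.items():
        if src_key in source:
            set_nested(out, dst_key, source[src_key])
    return out

legacy = {"CustomerName": "Asha", "CustomerEmail": "asha@example.com", "OrderTotal": 499}
print(transform(legacy))
```

Keeping the mapping declarative means adding a field is a one-line config change rather than new transformation code.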
Technical Skills Requirement
1. Programming Languages:
● Expert-level JavaScript and TypeScript
● Python scripting capabilities
● Working knowledge of React
2. Technical Expertise:
● Backend development proficiency
● SQL and NoSQL database integration
● REST and SOAP API implementation
● Endpoint design and optimization
● System integration architecture
Soft Skills Requirements:
● Exceptional problem-solving abilities
● Strong client communication skills
● Ability to ask precise, targeted technical questions
● Adaptable and quick-learning approach
● Detail-oriented with a holistic system understanding
Educational Qualifications:
● Bachelor's degree in Computer Science, or a related field
● 2-5 years of professional experience
● Proven track record of successfully connecting complex systems
● Demonstrated ability to work across technological boundaries
Employment Type: Full-time
Work days: 5 days a week
Location: Udyog Vihar, Gurgaon
One of the most reputed service groups in Oman
Job Description: PEL Data Analyst
Job Title PEL Data Analyst / PEL ADMIN OFFICER
About the Group
Our Client is one of the most reputed service groups in Oman’s construction and mining industry. The organization has grown from a small family business to one that leads the industry in construction contracting, manufacturing of cement products, building finishes products, roads, asphalt & infrastructure works, and mining, amongst other product offerings. The Group has achieved this by basing everything it does on its core values of HSE & Quality. With a diverse team of over 22,000 employees, The Group endeavours to serve the Sultanate with international-quality products & services to meet the demands of the growing nation.
Purpose of the job
Responsible for all day-to-day activities supporting the improvement of organizational effectiveness by working with extended teams & functions to develop & implement high-quality reports, dashboards, and performance indicators for the PEL Division.
Key Responsibilities & Activities:
1. Responsible for data modeling and analysis.
2. Responsible for deep data dives and extremely detailed analysis of various data points collected during the need-identification phase. Organize and present data to line management for appropriate guidance and subsequent process mapping.
3. Responsible for rigorous periodic reviews of various individual, functional, business-unit, and group metrics and indicators of success, through (but not limited to) key productivity indicators, result areas, health meter, performance goals, etc. Report results to line management and support the decision-making process.
4. Responsible for the development of high-quality analytical reports, and training packs.
5. Responsible for all individual and specific departmental productivity targets, financial objectives, KPIs and attainment thereof.
Other tasks:
1. Promoting The Group Values, Code of Conduct and associated policies
2. Participating and providing positive contributions to technical, commercial, planning, safety and quality performance of the organization
and by Client, Contractual, and regulatory requirements
3. Visiting Sites and Project locations to discuss operational aspects and carry out training.
4. Undertaking any other responsibilities as directed and mutually agreed with the Line Management.
The above list is not exhaustive. Individuals may be required to perform additional job-related tasks and duties as assigned.
Educational Qualifications
• Bachelor’s degree in Engineering
• Master’s degree (preferred)
Professional Certifications
• Certifications related to Data Analyst or Data Scientist roles (preferred)
Skills
• Advanced Excel, macros, MS Power Query, MS Power BI, and Google Data Studio (Looker Studio)
• Knowledge of Python, M Code, and DAX is a plus
• Data management, big data, data analysis
• Attention to detail and strong analytical skills
• Strong data management skills
Experience
• 4 years of position-specific experience
• 7 years of overall professional experience
Language
• English (fluent)
• Arabic (preferred)
- Advanced Linux/Unix support experience required.
- Strong shell scripting and Python programming skills for SRE-related activities required.
- Understanding of Veritas Cluster Service, load balancers, VMware, and Splunk required.
- Knowledge of ITIL principles required.
- Effective oral and written communication skills, and interpersonal skills to work well in a team environment, required.
- Strong organizational and coordination skills, with the ability to manage multiple tasks and high-pressure situations during outage handling, management, or resolution.
- Availability for weekend work.
Overall 4+ years of IT experience
Minimum 4 years of hands-on experience in Python and backend development with exposure to cloud technologies
Key Skills & Responsibilities:
Python Proficiency:
Strong hands-on experience in Python with knowledge of frameworks like Django or Flask.
Ability to write clean, efficient, and maintainable code.
Cloud Exposure:
Familiarity with cloud platforms (AWS, Azure, or GCP).
Hands-on experience with basic services like virtual machines, serverless functions (e.g., AWS Lambda), or storage solutions.
Backend Development:
Proficiency in developing and consuming RESTful APIs.
Knowledge of additional communication protocols (e.g., GraphQL, WebSockets) is a plus.
Database Management:
Experience with relational databases (e.g., PostgreSQL, MySQL).
Basic knowledge of NoSQL databases like MongoDB or DynamoDB.
System Design & Problem-Solving:
Ability to understand and implement scalable backend solutions.
Basic exposure to distributed systems or event-driven architectures (e.g., Kafka, RabbitMQ) is a bonus.
Collaboration & Communication:
Good communication skills to work effectively in a team.
Willingness to learn and adapt to new technologies.
Security & Best Practices:
Awareness of secure coding practices and API security (e.g., OAuth, JWT).
Good-to-Have Skills:
Basic understanding of full-stack development, including front-end technologies (React, Angular, Vue.js).
Familiarity with containerization (Docker) and CI/CD pipelines.
Knowledge of caching strategies using Redis or Memcached.
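The API-security item above (OAuth, JWT) rests on tokens whose integrity can be verified. A stdlib sketch of the HMAC-signature idea behind JWTs; this is a teaching toy, not the real JWT format, and production code should use a vetted library such as PyJWT:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # hypothetical; real keys come from a secrets manager

def sign_token(payload):
    # Encode the payload, then sign it so tampering is detectable.
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_token(token):
    # Recompute the signature and compare in constant time.
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("signature mismatch")
    return json.loads(base64.urlsafe_b64decode(body))

token = sign_token({"user_id": 42})
print(verify_token(token))
```

Real JWTs add a header, expiry claims, and key rotation on top of exactly this sign-then-verify core.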
About the Role:
We are seeking a skilled Python Backend Developer to join our dynamic team. This role focuses on designing, building, and maintaining efficient, reusable, and reliable code that supports both monolithic and microservices architectures. The ideal candidate will have a strong understanding of backend frameworks and architectures, proficiency in asynchronous programming, and familiarity with deployment processes. Experience with AI model deployment is a plus.
Overall 5+ years of IT experience, with a minimum of 5 years of experience in Python and an open-source web framework (Django), along with AWS experience.
Key Responsibilities:
- Develop, optimize, and maintain backend systems using Python, PySpark, and FastAPI.
- Design and implement scalable architectures, including both monolithic and microservices.
- 3+ years of working experience in AWS (Lambda, Serverless, Step Functions, and EC2).
- Deep knowledge of the Python Flask/Django frameworks.
- Good understanding of REST APIs.
- Sound knowledge of databases.
- Excellent problem-solving and analytical skills.
- Leadership skills, good communication skills, and an interest in learning modern technologies.
- Apply design patterns (MVC, Singleton, Observer, Factory) to solve complex problems effectively.
- Work with web servers (Nginx, Apache) and deploy web applications and services.
- Create and manage RESTful APIs; familiarity with GraphQL is a plus.
- Use asynchronous programming techniques (ASGI, WSGI, async/await) to enhance performance.
- Integrate background job processing with Celery and RabbitMQ, and manage caching mechanisms using Redis and Memcached.
- (Optional) Develop containerized applications using Docker and orchestrate deployments with Kubernetes.
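Of the design patterns listed in the responsibilities, Observer is the one most commonly wired into backend event flows. A compact sketch with illustrative event and field names:

```python
class OrderEvents:
    # Minimal Observer: subscribers register callbacks for an event name,
    # and publishers fire the event without knowing who is listening.
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, event, callback):
        self._subscribers.setdefault(event, []).append(callback)

    def publish(self, event, payload):
        for callback in self._subscribers.get(event, []):
            callback(payload)

received = []
events = OrderEvents()
events.subscribe("order_created", lambda order: received.append(order["id"]))
events.subscribe("order_created", lambda order: received.append("notified"))
events.publish("order_created", {"id": 101})
print(received)  # [101, 'notified']
```

In production the same shape appears with Celery tasks or a message broker in place of in-process callbacks.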
Required Skills:
- Languages & Frameworks: Python, Django, AWS
- Backend Architecture & Design: Strong knowledge of monolithic and microservices architectures, design patterns, and asynchronous programming.
- Web Servers & Deployment: Proficient in Nginx and Apache, with experience in RESTful API design and development. GraphQL experience is a plus.
- Background Jobs & Task Queues: Proficiency in Celery and RabbitMQ, with experience in caching (Redis, Memcached).
- Additional Qualifications: Knowledge of Docker and Kubernetes (optional); any exposure to AI model deployment is considered a bonus.
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in backend development using Python and Django and AWS.
- Demonstrated ability to design and implement scalable and robust architectures.
- Strong problem-solving skills, attention to detail, and a collaborative mindset.
Preferred:
- Experience with Docker/Kubernetes for containerization and orchestration.
- Exposure to AI model deployment processes.
Sarvaha would like to welcome a talented Software Development Engineer in Test (SDET) with a minimum of 5 years of experience to join our team. As an SDET, you will champion product quality and will design, develop, and maintain modular, extensible, and reusable test cases/scripts. This is a hands-on role that requires you to work with automation test developers and application developers to enhance the quality of the products and development practices. Please visit our website at http://www.sarvaha.com to know more about us.
Key Responsibilities
- Understand requirements through specification or exploratory testing, estimate QA efforts, design test strategy, develop optimal test cases, maintain RTM
- Design, develop & maintain a scalable test automation framework
- Build interfaces to seamlessly integrate testing with development environments.
- Create & manage test setups that prioritize scalability, remote accessibility and reliability.
- Automate test scripts, create and execute relevant test suites, analyze test results, and enhance existing scripts or build new ones for coverage. Communicate with stakeholders for requirements, troubleshooting, etc.; provide visibility into the work by sharing relevant reports and metrics
- Stay up-to-date with industry best practices in testing methodologies and technologies to advise QA and integration teams.
Skills Required
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field (Software Engineering preferred).
- Minimum of 5 years of experience in testing enterprise-grade, highly scalable, distributed applications, products, and services.
- Expertise in manual and automation testing, with an excellent understanding of test methodologies, test design techniques, and the test life cycle.
- Strong programming skills in TypeScript and Python, with experience using Playwright for building hybrid/BDD frameworks for website and API automation.
- Very good problem-solving and analytical skills.
- Experience with databases, both SQL and NoSQL.
- Practical experience in setting up CI/CD pipelines (ideally with Jenkins).
- Exposure to Docker, Kubernetes and EKS is highly desired.
- C# experience is an added advantage.
- A continuous learning mindset and a passion for exploring new technologies.
- Excellent communication, collaboration, quick learning of needed language/scripting and influencing skills.
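The Playwright framework skills above usually center on the page-object pattern: selectors and flows live in page classes, not in tests. A dependency-free sketch with a fake page stand-in (with Playwright itself you would call the same `fill`/`click` methods on a live `Page`; all selectors here are invented):

```python
class FakePage:
    """Stand-in for a Playwright Page so the sketch needs no browser."""
    def __init__(self):
        self.fields, self.url = {}, "/login"
    def fill(self, selector, value):
        self.fields[selector] = value
    def click(self, selector):
        # Pretend the app navigates once credentials are filled in.
        if self.fields.get("#user") and self.fields.get("#pass"):
            self.url = "/dashboard"

class LoginPage:
    """Page object: selectors live here, not in the tests."""
    def __init__(self, page):
        self.page = page
    def login(self, user, password):
        self.page.fill("#user", user)
        self.page.fill("#pass", password)
        self.page.click("#submit")

page = FakePage()
LoginPage(page).login("qa", "secret")
print(page.url)  # -> /dashboard
```

Keeping selectors in one place is what makes such a framework "modular, extensible, and reusable": when the UI changes, only the page object is edited, not every test.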
Position Benefits
- Competitive salary and excellent growth opportunities within a dynamic team.
- Positive and collaborative work environment with the opportunity to learn from talented colleagues.
- Highly challenging and rewarding software development problems to solve.
- Hybrid work model with established remote work options.
About Streamlyn
Streamlyn is a regional leader in adtech, specializing in enhancing monetization for Publishers through innovative and compelling ad products. Our ad tech engagement solutions suite empowers Publishers to elevate their business and monetize their content effectively. With a vast network of premium publisher partners across Asia, Streamlyn reaches over 100 million consumers monthly.
Job Overview
As the Tech Lead, you will play a critical role in defining the future of our advanced ad tech platform. This position requires full technical ownership of a high-impact project, where you’ll architect and implement a scalable, high-performance system while guiding a dedicated team of developers to advance our product roadmap.
In this role, you will combine strategic technical leadership with hands-on involvement to oversee the platform’s scalability, reliability, and technical evolution. Success will depend on a combination of ad tech expertise, strategic vision, and strong leadership as you drive innovation and foster a high-performance engineering culture.
Roles and Responsibilities
- Independent Project Leadership: Lead a dedicated project stream within our ad tech platform, from initial architecture and design through to delivery. Take full accountability for your project’s technical vision, resource planning, and execution, ensuring it meets performance and scalability requirements.
- Strategic Technical Ownership: Define and implement a technical roadmap that aligns with Streamlyn’s goals, focusing on scalability, resilience, and innovation. Ensure project components are optimized for real-time bidding, ad serving, and data processing requirements unique to ad tech.
- Performance & Scalability: Architect a platform capable of processing billions of requests daily with minimal latency. Utilize advanced load balancing, caching, and database technologies to achieve optimal performance and reliability.
- Team Building & Mentorship: Lead a specialized team of developers, fostering an environment of technical excellence and innovation. Set high standards for code quality, maintainability, and performance while mentoring team members and encouraging growth.
- Product Innovation & Cross-Functional Collaboration: Work with product, operations, and sales teams to ensure the products and platform meet high standards and unique demands. Drive the development of cutting-edge features like custom ad units and contextual targeting to keep Streamlyn competitive.
- Ad Tech Expertise: Apply a deep understanding of ad tech—DSPs, SSPs, oRTB protocols, and header bidding frameworks—to ensure seamless integration and adherence to industry best practices.
- Team and Process Development: Drive agile methodologies and efficient development processes within your team to support responsive and high-quality delivery. Implement continuous improvement practices that optimize team productivity and platform reliability.
Qualifications
- Experience: 5+ years in software development with hands-on ad tech experience, ideally in building ad servers, header bidding solutions, oRTB (Open Real-Time Bidding) integrations, and data pipelines.
- Technical Proficiency:
- Languages: Advanced knowledge in Java and Python.
- Databases: Strong experience with MariaDB, PostgreSQL, Aerospike, and familiarity with other NoSQL technologies.
- Data Pipeline: Proven experience designing and implementing data pipelines for handling large-scale ad data in real-time, including ingestion, processing, and transformation.
- Infrastructure and Cloud: Proficiency with AWS Cloud services, including EMR, EKS, and Docker Containers. Experience with Apache Spark, Kafka, and MKS.
- Ad Tech Ecosystem Knowledge: Strong understanding of the ad tech landscape, including DSPs, SSPs, header bidding, programmatic advertising, and RTB protocols.
- Architectural Expertise: Advanced skills in designing distributed, microservices-based architectures and developing APIs for high-performance, low-latency applications. Advanced knowledge in load balancing, caching, and database technologies to enhance performance and scalability.
- System Monitoring & Optimization: Hands-on experience with monitoring and optimization tools, such as Prometheus, Grafana, or similar for system reliability.
- Data Engineering: Experience with ETL (Extract, Transform, Load) processes, data warehousing, and building data flows for analytical and reporting purposes.
- Agile and Project Management: Experience with Agile methodologies and the ability to prioritize and manage complex projects.
- Leadership Skills: Demonstrated ability to lead a team, with experience in mentorship, team building, and technical coaching. Comfortable taking ownership and accountable for team deliverables.
- Communication: Excellent communication and collaboration skills, with the ability to articulate complex technical issues and facilitate cross-functional team discussions.
About Us:
Optimo Capital is a newly established NBFC founded by Prashant Pitti, who is also a co-founder of EaseMyTrip (a billion-dollar listed startup that grew profitably without any funding).
Our mission is to serve the underserved MSME businesses with their credit needs in India. With less than 15% of MSMEs having access to formal credit, we aim to bridge this credit gap through a phygital model (physical branches + digital decision-making).
As a technology and data-first company, tech lovers and data enthusiasts play a crucial role in building the analytics & tech at Optimo that helps the company thrive.
What We Offer:
Join our dynamic startup team as a Senior Data Analyst and play a crucial role in core data analytics projects involving credit risk, lending strategy, credit underwriting features analytics, collections, and portfolio management. The analytics team at Optimo works closely with the Credit & Risk departments, helping them make data-backed decisions.
This is an exceptional opportunity to learn, grow, and make a significant impact in a fast-paced startup environment. We believe that the freedom and accountability to make decisions in analytics and technology bring out the best in you and help us build the best for the company. This environment offers you a steep learning curve and an opportunity to experience the direct impact of your analytics contributions. Along with this, we offer industry-standard compensation.
What We Look For:
We are looking for individuals with a strong analytical mindset and a fundamental understanding of the lending industry, primarily focused on credit risk. We value not only your skills but also your attitude and hunger to learn, grow, lead, and thrive, both individually and as part of a team. We encourage you to take on challenges, bring in new ideas, implement them, and build the best analytics systems. Your willingness to put in the extra hours to build the best will be recognized.
Skills/Requirements:
- Credit Risk & Underwriting: Fundamental knowledge of credit risk and underwriting processes is mandatory. Experience in any lending financial institution is a must. A thorough understanding of all the features evaluated in the underwriting process like credit report info, bank statements, GST data, demographics, etc., is essential.
- Analytics (Python): Excellent proficiency in Python, particularly Pandas and NumPy. A strong analytical mindset and the ability to extract actionable insights from any analysis are crucial. The ability to convert given problem statements into actionable analytics tasks and frame effective approaches to tackle them is highly desirable.
- Good to have but not mandatory:
  - REST APIs: A fundamental understanding of APIs and previous experience or projects related to API development or integrations.
  - Git: Proficiency in version control systems, particularly Git. Experience in collaborative projects using Git is highly valued.
What You'll Be Working On:
- Analyze data from different data sources, extract information, and create action items to tackle the given open-ended problems.
- Build strong analytics systems and dashboards that provide easy access to data and insights, including the current status of the company, portfolio health, static pool, branch-wise performance, TAT (turnaround time) monitoring, and more.
- Assist the credit and risk team with insights and action items, helping them make data-backed decisions and fine-tune the credit policy (high involvement in the credit and underwriting process).
- Work on different rule engines that automate the underwriting process end-to-end.
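A hypothetical sketch of the rule-engine idea in the last bullet: each underwriting rule is a (name, predicate) pair evaluated over an applicant record. Field names and thresholds below are invented for illustration, not actual credit policy:

```python
# Each rule: (rule name, predicate over the applicant record).
RULES = [
    ("min_credit_score", lambda a: a["credit_score"] >= 650),
    ("min_monthly_gst_turnover", lambda a: a["gst_turnover"] >= 200_000),
    ("max_bounce_count", lambda a: a["bank_bounces_6m"] <= 2),
]

def underwrite(applicant: dict):
    """Return (approved, list of failed rule names)."""
    failed = [name for name, rule in RULES if not rule(applicant)]
    return (not failed, failed)

applicant = {"credit_score": 700, "gst_turnover": 250_000, "bank_bounces_6m": 4}
approved, failed = underwrite(applicant)
print(approved, failed)  # -> False ['max_bounce_count']
```

The advantage of the table-of-predicates shape is that the credit team can review and adjust rules without touching the evaluation loop.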
Other Requirements:
- Availability for full-time work in Bangalore. Immediate joiners are preferred.
- Strong passion for analytics and problem-solving.
- At least 1 year of industry experience in an analytics role, specifically in a lending institution, is a must.
- Self-motivated and capable of working both independently and collaboratively.
If you are ready to embark on an exciting journey of growth, learning, and innovation, apply now to join our pioneering team in Bangalore.
Greetings from Analyttica Datalab. We are currently seeking an Analytics Manager to join our team!
If you are looking for a place to grow your skills and share our values, this could be the right opportunity for you.
- Role: Analytics Manager
- Relevant Experience: 7-9 Years
- Type of Role: Full-Time
- Notice Period: 1 Month or Less (Immediate Joiners Preferred)
- Mandatory Skills: Data Analytics, Marketing Analytics, A/B Testing, SQL, Logistic Regression, Segmentation, Customer Analytics, Fintech, Python
- Work Location: Bangalore
About Us
We are a fast-growing, technology-enabled advanced analytics and data science start-up. We have clients in the USA and India, and those two geographies are our current market focus. We are developing an Artificial Intelligence (AI)-embedded data science and analytics product platform (patented in the US) with a series of offerings to solve real-world problems. We have a world-class team of in-house technocrats, external consultants, SMEs, data scientists, and experienced business professionals driving the product journey.
What Can You Expect?
Along with the commercial aspects of the engagement (no bar for the right candidate), you can expect an agile environment with a progressive culture and a state-of-the-art workplace. You will be presented with a great learning opportunity while interacting with people from diverse backgrounds with depth and breadth of experience. Our technical, SME, and business leadership team comprises professionals with 15+ years of experience each and strong credibility to their names.
What Role Are You Expected To Play?
As a Product Analytics Manager focused on growth, you’ll be responsible for guiding the Product Analytics feature and experimentation roadmap, which may include, but is not limited to, go-to-market strategy for partnerships and the application of analytics for optimizing the acquisition funnel strategy for our client. Specifically, you will be responsible for driving higher originations by converting web and app visitors into long-term customers.
Roles and Responsibilities:
- Work hands-on on top-priority and strategic initiatives to drive business outcomes.
- Lead and mentor a team of senior analysts and analysts, fostering a culture of collaboration and data-driven decision-making.
- Work closely with US stakeholders and manage expectations on project delivery, team management and capacity management.
- Develop and implement measurement strategies across marketing and product initiatives, ensuring data accuracy and integrity.
- Conduct in-depth analyses of marketing and product performance, identifying key trends, opportunities, and areas for improvement.
- Design and execute A/B tests to optimize marketing campaigns and product features.
- Prior experience in lifecycle marketing and product cross-selling is preferred.
- Develop and maintain dashboards and reports to track key metrics and provide actionable insights to stakeholders.
- Collaborate closely with cross-functional teams (marketing, product, engineering) to understand business needs and translate data into actionable recommendations.
- Stay up to date on industry trends and best practices in analytics and data management.
- Own the delivery in-take, time management and prioritization of the tasks for your team, leveraging JIRA
- Prior experience working for a US based Fintech would be preferred.
Essential Experience and Qualification:
- Minimum 7 years of experience in marketing and/or product analytics.
- Bachelor's degree in a quantitative field (e.g., Statistics, Mathematics, Computer Science, Economics). Master's degree is a plus.
- Proven expertise in SQL and data management.
- Hands-on experience with Python and advanced analytics techniques (e.g., regression analysis, classification techniques, predictive modeling).
- Hands-on experience with data visualization tools (e.g., Tableau, Looker).
- Hands-on exposure to Git & JIRA for project management.
- Strong understanding of A/B testing methodologies and statistical analysis.
- Excellent communication, presentation, and interpersonal skills.
- Ability to manage multiple projects and deadlines effectively.
- Coach and mentor team members on refining analytical approach, time management & prioritization.
- Prior experience working in a fast-paced environment and with US stakeholders.
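The A/B-testing and statistical-analysis requirements above typically reduce to a two-proportion z-test on conversion counts. A stdlib-only sketch with made-up numbers (a real analysis would also compute the p-value and check sample-size assumptions):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for H0: p_a == p_b, using the pooled proportion."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: control converts 2.0%, variant 2.6%.
z = two_proportion_z(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(round(z, 2))  # -> 2.83, above the 1.96 threshold for 95% confidence
```

A z above roughly 1.96 lets the team reject the null at the 5% level, which is the usual decision rule behind "the B variant won."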
People & Culture
As a company, we believe in working hard and letting our passion drive us to dream high and achieve higher. We inspire innovation through freedom of thought and creativity, and pursue excellence while ensuring that our integrity and ethical foundation remain strong. Honesty, determination, and perseverance form the core of our value system. Analyttica’s versatile talent fulfils our value proposition by taking on fresh challenges and delivering solutions that blend skill and intuition (science and art). Understanding the customer/partner mindset helps us provide technology-enabled scientific solutions at scale for sustainable business impact.
Client is based in Bangalore.
Job Title: Solution Architect
Work Location: Tokyo
Experience: 7-10 years
Number of Positions: 3
Job Description:
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Responsibilities:
- Collaborate with stakeholders to understand business needs and translate them into scalable and efficient technical solutions.
- Design and implement complex systems involving Machine Learning, Cloud Computing (at least two major clouds such as AWS, Azure, or Google Cloud), and Full Stack Development.
- Lead the design, development, and deployment of cloud-native applications with a focus on NoSQL databases, Python, and Kubernetes.
- Implement algorithms and provide scalable solutions, with a focus on performance optimization and system reliability.
- Review, validate, and improve architectures to ensure high scalability, flexibility, and cost-efficiency in cloud environments.
- Guide and mentor development teams, ensuring best practices are followed in coding, testing, and deployment.
- Contribute to the development of technical documentation and roadmaps.
- Stay up-to-date with emerging technologies and propose enhancements to the solution design process.
Key Skills & Requirements:
- Proven experience (7-10 years) as a Solution Architect or similar role, with deep expertise in Machine Learning, Cloud Architecture, and Full Stack Development.
- Expertise in at least two major cloud platforms (AWS, Azure, Google Cloud).
- Solid experience with Kubernetes for container orchestration and deployment.
- Strong hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB, etc.).
- Proficiency in Python, including experience with ML frameworks (such as TensorFlow, PyTorch, etc.) and libraries for algorithm development.
- Must have implemented at least two algorithms (e.g., classification, clustering, recommendation systems, etc.) in real-world applications.
- Strong experience in designing scalable architectures and applications from the ground up.
- Experience with DevOps and automation tools for CI/CD pipelines.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Strong communication skills and ability to collaborate with cross-functional teams.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
Preferred Skills:
- Experience with microservices architecture and containerization.
- Knowledge of distributed systems and high-performance computing.
- Certifications in cloud platforms (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.).
- Familiarity with Agile methodologies and Scrum.
- Knowledge of the Japanese language is an additional advantage for the candidate, but not mandatory.
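As an illustration of the "implemented at least two algorithms" requirement, here is one of the named examples, clustering, as a tiny k-means in pure Python (1-D points, k=2). Production work would use scikit-learn or an ML framework, and the data below is made up:

```python
def kmeans_1d(points, k=2, iters=20):
    """Naive k-means on 1-D points; returns sorted centroids."""
    centroids = points[:k]                      # naive init: first k points
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: recompute each centroid as its cluster's mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
print(kmeans_1d(data))  # -> [1.0, 10.0]
```

The same assign/update loop generalizes to higher dimensions by swapping `abs(p - c)` for a Euclidean distance and averaging vectors instead of scalars.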
Job Description: Data Engineer (Fintech Firm)
Position: Data Engineer
Experience: 2-4 Years
Location: Mumbai-Andheri
Employment Type: Full-Time
About Us:
We are a dynamic fintech firm dedicated to revolutionizing the financial services industry through innovative data solutions. We believe in leveraging cutting-edge technology to provide superior financial products and services to our clients. Join our team and be a part of this exciting journey.
Job Overview:
We are looking for a skilled Data Engineer with 3-5 years of experience to join our data team. The ideal candidate will have a strong background in ETL processes, data pipeline creation, and database management. As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data systems and pipelines.
Key Responsibilities:
- Design and develop robust and scalable ETL processes to ingest and process large datasets from various sources.
- Build and maintain efficient data pipelines to support real-time and batch data processing.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Optimize database performance and ensure data integrity and security.
- Troubleshoot and resolve data-related issues and provide support for data operations.
- Implement data quality checks and monitor data pipeline performance.
- Document technical solutions and processes for future reference.
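The ETL responsibilities above can be sketched end-to-end with the stdlib alone: extract rows from CSV, transform (type-cast and filter), and load into SQLite. Table and column names are invented for illustration:

```python
import csv
import io
import sqlite3

raw = "txn_id,amount\n1,250.00\n2,-40.00\n3,99.50\n"   # stands in for a source file

# Extract: parse the CSV into dict rows.
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: cast types and drop refunds (negative amounts) as a data-quality check.
clean = [(int(r["txn_id"]), float(r["amount"]))
         for r in rows if float(r["amount"]) > 0]

# Load: write the clean rows into a relational table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE txns (txn_id INTEGER PRIMARY KEY, amount REAL)")
db.executemany("INSERT INTO txns VALUES (?, ?)", clean)

total = db.execute("SELECT SUM(amount) FROM txns").fetchone()[0]
print(total)  # -> 349.5
```

A production pipeline replaces the in-memory pieces with real sources, a warehouse target, and scheduling, but the extract/transform/load stages keep this shape.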
Required Skills and Qualifications:
- Bachelor's degree in Engineering or a related field.
- 3-5 years of experience in data engineering or a related role.
- Strong proficiency in ETL tools and techniques.
- Experience with SQL and relational databases (e.g., MySQL, PostgreSQL).
- Familiarity with big data technologies.
- Proficiency in programming languages such as Python, Java, or Scala.
- Knowledge of data warehousing concepts and tools.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
Preferred Qualifications:
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of machine learning and data science principles.
- Experience with real-time data processing and streaming platforms (e.g., Kafka).
What We Offer:
- Competitive compensation package (12-20 LPA) based on experience and qualifications.
- Opportunity to work with a talented and innovative team in the fintech industry.
- Professional development and growth opportunities.
This requirement is for a Data Engineer in Gurugram for a Data Analytics project.
Responsibilities:
- Building ETL/ELT pipelines of data from various sources using SQL/Python/Spark
- Ensuring that data are modelled and processed according to architecture and requirements, both functional and non-functional
- Understanding and implementing required development guidelines, design standards, and best practices
- Delivering the right solution architecture, automation, and technology choices
- Working cross-functionally with enterprise architects, information security teams, and platform teams
- Suggesting and implementing architecture improvements
Requirements:
- Experience with programming languages such as Python or Scala
- Knowledge of Data Warehouse, Business Intelligence, and ETL/ELT data processing issues
- Ability to create and orchestrate ETL/ELT processes in different tools (ADF, Databricks Workflows)
- Experience working with the Databricks platform: workspace, Delta Lake, workflows, jobs, Unity Catalog
- Understanding of SQL and relational databases
- Practical knowledge of various relational and non-relational database engines in the cloud (Azure SQL Database, Azure Cosmos DB, Microsoft Fabric, Databricks)
- Hands-on experience with data services offered by the Azure cloud
- Knowledge of Apache Spark (Databricks, Azure Synapse Spark Pools)
- Experience in performing code review of ETL/ELT pipelines and SQL queries
- Analytical approach to problem solving
We're seeking an experienced Backend Software Engineer to join our team.
As a backend engineer, you will be responsible for designing, developing, and deploying scalable backends for the products we build at NonStop.
This includes APIs, databases, and server-side logic.
Responsibilities:
- Design, develop, and deploy backend systems, including APIs, databases, and server-side logic
- Write clean, efficient, and well-documented code that adheres to industry standards and best practices
- Participate in code reviews and contribute to the improvement of the codebase
- Debug and resolve issues in the existing codebase
- Develop and execute unit tests to ensure high code quality
- Work with DevOps engineers to ensure seamless deployment of software changes
- Monitor application performance, identify bottlenecks, and optimize systems for better scalability and efficiency
- Stay up-to-date with industry trends and emerging technologies; advocate for best practices and new ideas within the team
- Collaborate with cross-functional teams to identify and prioritize project requirements
Requirements:
- At least 2 years of experience building scalable and reliable backend systems
- Strong proficiency in at least one programming language such as Python, Node.js, Golang, or RoR
- Experience with at least one framework such as Django, Express, or gRPC
- Knowledge of database systems such as MySQL, PostgreSQL, MongoDB, Cassandra, or Redis
- Familiarity with containerization technologies such as Docker and Kubernetes
- Understanding of software development methodologies such as Agile and Scrum
- Ability to demonstrate flexibility in picking up a new technology stack and ramping up on it fairly quickly
- Bachelor's/Master's degree in Computer Science or related field
- Strong problem-solving skills and ability to collaborate effectively with cross-functional teams
- Good written and verbal communication skills in English
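A hedged sketch of the API-plus-server-side-logic work described above, using only the stdlib WSGI interface (a real service would use Django, Express, or gRPC as listed); the route and payload are illustrative:

```python
import json

def app(environ, start_response):
    """Tiny WSGI application: one JSON endpoint plus a 404 fallback."""
    if environ["PATH_INFO"] == "/health":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

# Exercise the WSGI callable directly, without starting a server --
# the same trick a unit test would use.
captured = {}
def start_response(status, headers):
    captured["status"] = status

resp = b"".join(app({"PATH_INFO": "/health"}, start_response))
print(captured["status"], resp)  # -> 200 OK b'{"status": "ok"}'
```

Calling the application object directly, rather than over HTTP, is what makes this style of backend code easy to cover with the unit tests the responsibilities list asks for.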
We’re looking for a Tech Lead with expertise in ReactJS (Next.js), backend technologies, and database management to join our dynamic team.
Key Responsibilities:
- Lead and mentor a team of 4-6 developers.
- Architect and deliver innovative, scalable solutions.
- Ensure seamless performance while handling large volumes of data without system slowdowns.
- Collaborate with cross-functional teams to meet business goals.
Required Expertise:
- Frontend: ReactJS (Next.js is a must).
- Backend: Experience in Node.js, Python, or Java.
- Databases: SQL (mandatory), MongoDB (nice to have).
- Caching & Messaging: Redis, Kafka, or Cassandra experience is a plus.
- Proven experience in system design and architecture.
- Cloud certification is a bonus.
at Phonologies (India) Private Limited
Job Description
Phonologies is seeking a Senior Data Engineer to lead data engineering efforts for developing and deploying generative AI and large language models (LLMs). The ideal candidate will excel in building data pipelines, fine-tuning models, and optimizing infrastructure to support scalable AI systems for enterprise applications.
Role & Responsibilities
- Data Pipeline Management: Design and manage pipelines for AI model training, ensuring efficient data ingestion, storage, and transformation for real-time deployment.
- LLM Fine-Tuning & Model Lifecycle: Fine-tune LLMs on domain-specific data, and oversee the model lifecycle using tools like MLFlow and Weights & Biases.
- Scalable Infrastructure: Optimize infrastructure for large-scale data processing and real-time LLM performance, leveraging containerization and orchestration in hybrid/cloud environments.
- Data Management: Ensure data quality, security, and compliance, with workflows for handling sensitive and proprietary datasets.
- Continuous Improvement & MLOps: Apply MLOps/LLMOps practices for automation, versioning, and lifecycle management, while refining tools and processes for scalability and performance.
- Collaboration: Work with data scientists, engineers, and product teams to integrate AI solutions and communicate technical capabilities to business stakeholders.
Preferred Candidate Profile
- Experience: 5+ years in data engineering, focusing on AI/ML infrastructure, LLM fine-tuning, and deployment.
- Technical Skills: Advanced proficiency in Python, SQL, and distributed data tools.
- Model Management: Hands-on experience with MLFlow, Weights & Biases, and model lifecycle management.
- AI & NLP Expertise: Familiarity with LLMs (e.g., GPT, BERT) and NLP frameworks like Hugging Face Transformers.
- Cloud & Infrastructure: Strong skills with AWS, Azure, Google Cloud, Docker, and Kubernetes.
- MLOps/LLMOps: Expertise in versioning, CI/CD, and automating AI workflows.
- Collaboration & Communication: Proven ability to work with cross-functional teams and explain technical concepts to non-technical stakeholders.
- Education: Degree in Computer Science, Data Engineering, or related field.
Perks and Benefits
- Competitive Compensation: INR 20L to 30L per year.
- Innovative Work Environment: Work with cutting-edge AI and data engineering tools in a collaborative setting that supports continuous learning in data engineering and AI.
We are seeking an experienced Senior Golang Developer to join our dynamic engineering team at our Hyderabad office (Hybrid option available).
What You'll Do:
- Collaborate with a team of engineers to design, develop, and support web and mobile applications using Golang.
- Work in a fast-paced agile environment, delivering high-quality solutions focused on continuous innovation.
- Tackle complex technical challenges with creativity and out-of-the-box thinking.
- Take ownership of critical components and gradually assume responsibility for significant portions of the product.
- Develop robust, scalable, and performant backend systems using Golang.
- Contribute to all phases of the development lifecycle, including design, coding, testing, and deployment.
- Build and maintain SQL and NoSQL databases to support application functionality.
- Document your work and collaborate effectively with cross-functional teams, including QA, engineering, and business units.
- Work with global teams to architect solutions, provide estimates, reduce complexity, and deliver a world-class platform.
Who Should Apply:
- 5+ years of experience in backend development with a strong focus on Golang.
- Proficient in building and deploying RESTful APIs and microservices.
- Experience with SQL and NoSQL databases (e.g., MySQL, MongoDB).
- Familiarity with cloud platforms such as AWS and strong Linux skills.
- Hands-on experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of system design principles, scalability, and high availability.
- Exposure to frontend technologies like React or mobile development is a plus.
- Experience working in an Agile/Scrum environment.
CoinCrowd
Full Stack Engineer
Full-time, Remote
In a rapidly evolving technological landscape, cryptocurrencies are transforming the global financial system. At CoinCrowd, we’re not just keeping pace—we’re at the forefront! Our cutting-edge platform is redefining the digital economy by offering advanced payment solutions, optimizing wallet portfolio management, and enhancing security measures.
Your Role:
As a Full Stack Developer, you will be central to our mission. This is a full-time, remote position.
What You’ll Do:
● Strategic Vision: Develop and implement the technical vision, strategy, and roadmap for CoinCrowd's cryptocurrency platform.
● Platform Excellence: Lead the design, development, and maintenance of our platform, ensuring it is secure, performant, and scalable.
● Collaborate: Work with product managers and stakeholders to define and prioritize engineering projects.
● Stay Cutting-Edge: Keep up with industry trends and emerging technologies to ensure our platform remains at the forefront.
● Best Practices: Promote and implement best practices, coding standards, and efficient development processes.
● Resource Management: Oversee budgets, resources, and timelines for engineering projects.
● Focus on Security: Ensure strong security, compliance, and data protection practices.
About You:
● Experience: 5+ years in technology.
● Technical Skills: Proficient in Node.js or Python, React, Flutter, PostgreSQL, Elastic, and GCP.
● Industry Knowledge: Experience in fintech or crypto, with a strong understanding of blockchain and cryptocurrency.
● Problem Solver: Excellent technical and problem-solving abilities.
● Agile Expert: Skilled in agile development methodologies.
Your Superpowers:
● Blockchain Expert: Strong knowledge of blockchain platforms, wallets, and distributed ledger technologies.
● Framework Proficiency: Experience with Ethereum, Hyperledger, or Corda.
● Security Knowledge: In-depth understanding of cryptographic principles and secure coding practices.
● Excellent Communicator: Ability to work effectively with stakeholders at all levels.
● Integration Skills: Experience integrating blockchain solutions with external systems and APIs.
● Educational Background: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
What's in It for You:
This role offers equity-only compensation, giving you a stake in CoinCrowd's success. It’s a unique opportunity to be part of something significant and see your efforts directly impact our growth!
Join us at CoinCrowd and be part of the crypto revolution!
Job Description:
We are seeking a skilled Quant Developer with strong programming expertise and an interest in financial markets. You will be responsible for designing, developing, and optimizing trading algorithms, analytics tools, and quantitative models. Prior experience with cryptocurrencies is a plus but not mandatory.
Key Responsibilities:
• Develop and implement high-performance trading algorithms and strategies.
• Collaborate with quantitative researchers to translate models into robust code.
• Optimize trading system performance and latency.
• Maintain and improve existing systems, ensuring reliability and scalability.
• Work with market data, conduct analysis, and support trading operations.
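As an illustrative sketch of the kind of work described above (the strategy, prices, and window sizes here are hypothetical, not an actual trading model), a minimal moving-average crossover backtest fits in a few lines of plain Python:

```python
# Minimal moving-average crossover backtest sketch.
# Purely illustrative: prices, windows, and the strategy are hypothetical.

def moving_average(series, window):
    """Trailing simple moving average; None until enough data points exist."""
    return [
        sum(series[i - window + 1 : i + 1]) / window if i >= window - 1 else None
        for i in range(len(series))
    ]

def crossover_pnl(prices, fast=2, slow=3):
    """Hold one unit while the fast MA is above the slow MA; return total P&L."""
    fast_ma = moving_average(prices, fast)
    slow_ma = moving_average(prices, slow)
    pnl, position = 0.0, 0
    for i in range(1, len(prices)):
        pnl += position * (prices[i] - prices[i - 1])   # mark yesterday's position
        if fast_ma[i] is not None and slow_ma[i] is not None:
            position = 1 if fast_ma[i] > slow_ma[i] else 0
    return pnl

prices = [100, 101, 103, 102, 105, 107, 106]
print(round(crossover_pnl(prices), 2))  # prints 3.0
```

A production system would add transaction costs, position sizing, and out-of-sample validation; this only shows the signal-to-P&L plumbing.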
Required Skills:
• Strong proficiency in Python and/or C++.
• Solid understanding of data structures, algorithms, and object-oriented programming.
• Familiarity with financial markets or trading systems (experience in crypto is a bonus).
• Experience with distributed systems, databases, and performance optimization.
• Knowledge of numerical libraries (e.g., NumPy, pandas, Boost) and statistical analysis.
Preferred Qualifications:
• Experience in developing low-latency systems.
• Understanding of quantitative modeling and backtesting frameworks.
• Familiarity with trading protocols (FIX, WebSocket, REST APIs).
• Interest or experience in cryptocurrencies and blockchain technologies.
Cleantech Industry Resources
Core Responsibilities
Front-end Development:
o Develop web applications using React and Flask.
o Implement responsive design to ensure user-friendly interfaces.
o Utilize mapping libraries (e.g., Leaflet, Mapbox) to display geospatial data.
Back-end Integration:
o Work with APIs and manage data flow for project analytics and management.
o Enhance features for robust data handling and real-time project analysis.
Cloud Deployment:
o Deploy and manage web applications on AWS and similar platforms.
o Optimize for performance, scalability, and storage efficiency.
GIS and Mapping Tools:
o Develop tools using GIS data like KML and Shapefiles.
Job Title - Cloud Fullstack Engineer
Experience Required - 5 Years
Location - Mumbai
Immediate joiners are preferred.
About the Job
As a Cloud Fullstack Engineer, you will design, develop, and maintain end-to-end solutions for cloud-based applications. You will be responsible for building both frontend and backend components, integrating them seamlessly, and ensuring they work efficiently within a cloud infrastructure.
What You’ll Be Doing
- Frontend Development
- Design and implement user-friendly and responsive web interfaces using modern frontend technologies (e.g., React, Angular, Vue.js).
- Ensure cross-browser compatibility and mobile responsiveness of web applications.
- Collaborate with UX/UI designers to translate design specifications into functional and visually appealing interfaces.
- Backend Development
- Develop scalable and high-performance backend services and APIs to support frontend functionalities.
- Design and manage cloud-based databases and data storage solutions.
- Implement authentication, authorization, and security best practices in backend services.
- Cloud Integration
- Build and deploy cloud-native applications using platforms such as AWS, Google Cloud Platform (GCP), or Azure.
- Leverage cloud services for computing, storage, and networking to enhance application performance and scalability.
- Implement and manage CI/CD pipelines for seamless deployment of applications and updates.
- End-to-End Solution Development
- Architect and develop fullstack applications that integrate frontend and backend components efficiently.
- Ensure data flow between frontend and backend is seamless and secure.
- Troubleshoot and resolve issues across the stack, from UI bugs to backend performance problems.
- Performance Optimization
- Monitor and optimize application performance, including frontend load times and backend response times.
- Implement caching strategies, load balancing, and other performance-enhancing techniques.
- Conduct performance testing and address bottlenecks and scalability issues.
- Security and Compliance
- Implement security best practices for both frontend and backend components to protect against vulnerabilities.
- Ensure compliance with relevant data protection regulations and industry standards.
- Conduct regular security assessments and audits to maintain application integrity.
- Collaboration and Communication
- Work closely with cross-functional teams, including product managers, designers, and other engineers, to deliver high-quality solutions.
- Participate in code reviews, technical discussions, and project planning sessions.
- Document code, processes, and architecture to facilitate knowledge sharing and maintainability.
- Continuous Improvement
- Stay updated with the latest trends and advancements in frontend and backend development, as well as cloud technologies.
- Contribute to the development of best practices and standards for fullstack development within the team.
- Participate in knowledge-sharing sessions and provide mentorship to junior engineers.
What We Need To See
- Strong experience in both frontend and backend development, as well as expertise in cloud technologies and services.
- Experience in fullstack development, with a strong focus on both frontend and backend technologies.
- Proven experience with cloud platforms (AWS, GCP, Azure) and cloud-native application development.
- Experience with modern frontend frameworks (e.g., React, Angular, Vue.js) and backend technologies (e.g., Node.js, Java, Python).
- Technical Expertise:
1. Frontend
- Hands-on experience with HTML5, CSS, JavaScript, React, Next.js, Redux, jQuery
2. Proficiency in Backend Development
- Strong experience with backend programming languages such as Node.js, Python
- Expertise in working with frameworks such as NestJS, Express.js, or Django.
3. Microservices Architecture
- Experience designing and implementing microservices architectures.
- Knowledge of service discovery, API gateways, and distributed tracing.
4. API Development
- Proficiency in designing, building, and maintaining RESTful and GraphQL APIs.
- Experience with API security, rate limiting, and authentication mechanisms (e.g., JWT, OAuth).
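The token-based authentication mechanisms listed above can be sketched as follows. This is a stdlib-only illustration of how a JWT-style token is signed and verified with HMAC-SHA256; a real service should use a vetted library such as PyJWT rather than rolling its own:

```python
# Sketch of JWT-style HS256 signing and verification using only the stdlib.
# Illustrative only: production code should use a vetted library (e.g., PyJWT).
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url encoding without padding, as JWT uses."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict, secret: bytes) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str, secret: bytes) -> bool:
    header, body, sig = token.split(".")
    expected = _b64url(
        hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    )
    return hmac.compare_digest(sig, expected)  # constant-time comparison

token = sign({"sub": "user-42"}, b"demo-secret")
print(verify(token, b"demo-secret"))   # True
print(verify(token, b"wrong-secret"))  # False
```

Real deployments also validate registered claims such as `exp` and `iss`, which this sketch omits.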
5. Database Management
- Strong knowledge of relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB).
- Experience in database schema design, optimization, and management.
6. Cloud Services
- Hands-on experience with cloud platforms such as Azure, AWS, or Google Cloud.
- Security: Knowledge of security best practices and experience implementing secure coding practices.
- Soft Skills:
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills, with the ability to work effectively in a team environment.
- Ability to manage multiple priorities and work in a fast-paced, dynamic environment.
Key Responsibilities:
- Designing, developing, and maintaining AI/NLP-based software solutions.
- Collaborating with cross-functional teams to define requirements and implement new features.
- Optimizing performance and scalability of existing systems.
- Conducting code reviews and providing constructive feedback to team members.
- Staying up-to-date with the latest developments in AI and NLP technologies.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or related field.
- 6+ years of experience in software development, with a focus on Python programming.
- Strong understanding of artificial intelligence and natural language processing concepts.
- Hands-on experience with AI/NLP frameworks such as Llamaindex/Langchain, OpenAI, etc.
- Experience in implementing Retrieval-Augmented Generation (RAG) systems for enhanced AI solutions.
- Proficiency in building and deploying machine learning models.
- Excellent problem-solving skills and attention to detail.
- Strong communication and interpersonal skills.
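The RAG pattern named in the requirements can be sketched at toy scale: retrieve the documents most relevant to a query, then prepend them as context before calling an LLM. This illustration substitutes bag-of-words cosine similarity for real embeddings and stops at prompt construction; frameworks like LlamaIndex or LangChain provide production versions of each stage:

```python
# Toy Retrieval-Augmented Generation (RAG) sketch using only the stdlib.
# Illustrative: real systems use embedding models and a vector database,
# not bag-of-words term counts.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list) -> str:
    """Augment the query with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = ["the refund policy allows returns within 30 days",
        "our office is closed on public holidays"]
print(build_prompt("what is the refund policy", docs))
```

In a real pipeline `embed` would call an embedding model and `retrieve` would query a vector store; the prompt would then be sent to the LLM for generation.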
Preferred Qualifications:
- Master's degree or higher in Computer Science, Engineering, or related field.
- Experience with cloud computing platforms such as AWS, Azure, or Google Cloud.
- Familiarity with big data technologies such as Hadoop, Spark, etc.
- Contributions to open-source projects related to AI/NLP
Senior Backend Developer
Job Overview: We are looking for a highly skilled and experienced Backend Developer who excels in building robust, scalable backend systems using multiple frameworks and languages. The ideal candidate will have 4+ years of experience working with at least two backend frameworks and be proficient in at least two programming languages such as Python, Node.js, or Go. As a Senior Backend Developer, you will play a critical role in designing, developing, and maintaining backend services, ensuring seamless real-time communication with WebSockets, and optimizing system performance with tools like Redis, Celery, and Docker.
Key Responsibilities:
- Design, develop, and maintain backend systems using multiple frameworks and languages (Python, Node.js, Go).
- Build and integrate APIs, microservices, and other backend components.
- Implement real-time features using WebSockets and ensure efficient server-client communication.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Optimize backend systems for performance, scalability, and reliability.
- Troubleshoot and debug complex issues, providing efficient and scalable solutions.
- Work with caching systems like Redis to enhance performance and manage data.
- Utilize task queues and background job processing tools like Celery.
- Develop and deploy applications using containerization tools like Docker.
- Participate in code reviews and provide constructive feedback to ensure code quality.
- Mentor junior developers, sharing best practices and promoting a culture of continuous learning.
- Stay updated with the latest backend development trends and technologies to keep our solutions cutting-edge.
Required Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent work experience.
- 4+ years of professional experience as a Backend Developer.
- Proficiency in at least two programming languages: Python, Node.js, or Go.
- Experience working with multiple backend frameworks (e.g., Express, Flask, Gin, Fiber, FastAPI).
- Strong understanding of WebSockets and real-time communication.
- Hands-on experience with Redis for caching and data management.
- Familiarity with task queues like Celery for background job processing.
- Experience with Docker for containerizing applications and services.
- Strong knowledge of RESTful API design and implementation.
- Understanding of microservices architecture and distributed systems.
- Solid understanding of database technologies (SQL and NoSQL).
- Excellent problem-solving skills and attention to detail.
- Strong communication skills, both written and verbal.
Preferred Qualifications:
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Familiarity with CI/CD pipelines and DevOps practices.
- Experience with GraphQL and other modern API paradigms.
- Familiarity with task queues, caching, or message brokers (e.g., Celery, Redis, RabbitMQ).
- Understanding of security best practices in backend development.
- Knowledge of automated testing frameworks for backend services.
- Familiarity with version control systems, particularly Git.
Job Title: DevOps Engineer
Location: Remote
Type: Full-time
About Us:
At Tese, we are committed to advancing sustainability through innovative technology solutions. Our platform empowers SMEs, financial institutions, and enterprises to achieve their Environmental, Social, and Governance (ESG) goals. We are looking for a skilled and passionate DevOps Engineer to join our team and help us build and maintain scalable, reliable, and efficient infrastructure.
Role Overview:
As a DevOps Engineer, you will be responsible for designing, implementing, and managing the infrastructure that supports our applications and services. You will work closely with our development, QA, and data science teams to ensure smooth deployment, continuous integration, and continuous delivery of our products. Your role will be critical in automating processes, enhancing system performance, and maintaining high availability.
Key Responsibilities:
- Infrastructure Management:
- Design, implement, and maintain scalable cloud infrastructure on platforms such as AWS, Google Cloud, or Azure.
- Manage server environments, including provisioning, monitoring, and maintenance.
- CI/CD Pipeline Development:
- Develop and maintain continuous integration and continuous deployment pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI.
- Automate deployment processes to ensure quick and reliable releases.
- Configuration Management and Automation:
- Implement infrastructure as code (IaC) using tools like Terraform, Ansible, or CloudFormation.
- Automate system configurations and deployments to improve efficiency and reduce manual errors.
- Monitoring and Logging:
- Set up and manage monitoring tools (e.g., Prometheus, Grafana, ELK Stack) to track system performance and troubleshoot issues.
- Implement logging solutions to ensure effective incident response and system analysis.
- Security and Compliance:
- Ensure systems are secure and compliant with industry standards and regulations.
- Implement security best practices, including identity and access management, network security, and vulnerability assessments.
- Collaboration and Support:
- Work closely with development and QA teams to support application deployments and troubleshoot issues.
- Provide support for infrastructure-related inquiries and incidents.
Qualifications:
- Education:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- Experience:
- 3-5 years of experience in DevOps, system administration, or related roles.
- Hands-on experience with cloud platforms such as AWS, Google Cloud Platform, or Azure.
- Technical Skills:
- Proficiency in scripting languages like Bash, Python, or Ruby.
- Strong experience with containerization technologies like Docker and orchestration tools like Kubernetes.
- Knowledge of configuration management tools (Ansible, Puppet, Chef).
- Experience with CI/CD tools (Jenkins, GitLab CI/CD, CircleCI).
- Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK Stack).
- Understanding of networking concepts and security best practices.
- Soft Skills:
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration abilities.
- Ability to work in a fast-paced environment and manage multiple tasks.
Preferred Qualifications:
- Experience with infrastructure as code (IaC) tools like Terraform or CloudFormation.
- Knowledge of microservices architecture and serverless computing.
- Familiarity with database administration (SQL and NoSQL databases).
- Experience with Agile methodologies and working in a Scrum or Kanban environment.
- Passion for sustainability and interest in ESG initiatives.
Benefits:
- Competitive salary, benefits package, and performance bonuses.
- Flexible working hours and remote work options.
- Opportunity to work on impactful projects that promote sustainability.
- Professional development opportunities, including access to training and conferences.
Wishlist’s mission is to amplify company performance by igniting the power of people. We understand that companies don’t build a business, they build people, and the people build the business. Our rewards and recognition platform takes human psychology, digitizes it, and then connects people to business performance.
We accomplish our mission through our values:
- Build the House
- Be Memorable
- Win Together
- Seek Solutions
We are looking for a talented and experienced machine learning engineer to build efficient, data-driven artificial intelligence systems that advance our predictive automation capabilities. The candidate should be highly skilled in statistics and programming, with the ability to confidently assess, analyze, and organize large amounts of data as well as a deep understanding of data science and software engineering principles.
Responsibilities -
- Design, develop, and optimize recommendation systems, leveraging content-based, collaborative, and hybrid filtering techniques.
- Conduct end-to-end development of recommendation algorithms, from prototyping and experimentation to deployment and monitoring.
- Build and maintain data pipelines to support real-time and batch recommendation use cases.
- Implement KNN search, cosine similarity, semantic search, and other relevant techniques to improve recommendation accuracy.
- Utilize A/B testing and other experimental methods to validate the performance of recommendation models.
- Collaborate with cross-functional teams, including product managers, engineers, and designers, to define recommendation requirements and ensure effective integration.
- Monitor, troubleshoot, and improve recommendation models to ensure optimal system performance and scalability.
- Document models, experiments, and analysis results clearly and comprehensively.
- Stay up-to-date with the latest ML/AI and Gen AI trends, techniques, and tools, bringing innovative ideas to enhance our platform and improve the customer experience.
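The neighbour-based techniques listed above can be sketched on a toy rating matrix (the users, items, and ratings below are invented for illustration; a production system would use learned embeddings and an approximate-nearest-neighbour index):

```python
# Neighbour-based recommendation sketch: cosine similarity + k nearest
# neighbours over a tiny user-item rating matrix. Data is hypothetical.
import math

ratings = {  # user -> {item: rating}
    "alice": {"matrix": 5, "inception": 4, "up": 1},
    "bob":   {"matrix": 4, "inception": 5, "coco": 3},
    "carol": {"up": 5, "coco": 4},
}

def cosine(u: dict, v: dict) -> float:
    shared = set(u) & set(v)
    dot = sum(u[i] * v[i] for i in shared)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(
        sum(x * x for x in v.values())
    )
    return dot / norm if norm else 0.0

def recommend(user: str, k: int = 1) -> list:
    """Recommend unseen items rated by the user's k most similar neighbours."""
    neighbours = sorted(
        (u for u in ratings if u != user),
        key=lambda u: cosine(ratings[user], ratings[u]),
        reverse=True,
    )[:k]
    seen = set(ratings[user])
    scores = {}
    for n in neighbours:
        for item, r in ratings[n].items():
            if item not in seen:
                scores[item] = scores.get(item, 0) + r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # ['coco']
```

Content-based and hybrid approaches differ mainly in what gets embedded (item features vs. co-rating behaviour); the similarity-then-rank skeleton stays the same.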
Required Skills -
- 3+ years of experience in building recommendation systems, with a strong understanding of content-based, collaborative filtering, and hybrid approaches.
- Bachelor’s or Master’s degree in Computer Science, Data Science, Machine Learning, or a related field.
- Strong knowledge of machine learning techniques, statistics, and algorithms related to recommendations.
- Proficiency in Python, R, or similar programming languages.
- Hands-on experience with ML libraries and frameworks (e.g., TensorFlow, PyTorch, Scikit-Learn).
- Experience with KNN search, cosine similarity, semantic search, and other similarity measures.
- Familiarity with NLP techniques and libraries (e.g., spaCy, Hugging Face) to improve content-based recommendations.
- Hands-on experience deploying recommendation systems on cloud platforms like AWS, GCP, or Azure.
- Experience with MLflow or similar tools for model tracking and lifecycle management.
- Excellent problem-solving abilities and experience in data-driven experimentation and A/B testing.
- Strong communication skills and ability to work effectively in a collaborative team environment.
Benefits -
- Competitive salary as per the market standard.
- Directly work with the Executive Leadership.
- Join a great workplace & culture.
- Company-paid medical insurance for you and your family.
Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable Model tracking, model experimentation, Model automation
- Develop scalable ML pipelines
- Develop MLOps components in the machine learning development life cycle using a Model Repository (either of): MLflow, Kubeflow Model Registry
- Develop MLOps components in the machine learning development life cycle using Machine Learning Services (either of): Kubeflow, DataRobot, Hopsworks, Dataiku, or any relevant ML E2E PaaS/SaaS
- Work across all phases of the model development life cycle to build MLOps components
- Build the knowledge base required to deliver increasingly complex MLOps projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 3-5 years of experience building production-quality software.
- B.E/B.Tech/M.Tech in Computer Science or related technical degree OR Equivalent
- Strong experience in System Integration, Application Development or Data Warehouse projects across technologies used in the enterprise space
- Knowledge of MLOps, machine learning, and Docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- CI/CD experience (e.g., Jenkins, GitHub Actions)
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable Model tracking, model experimentation, Model automation
- Develop scalable ML pipelines
- Develop MLOps components in the machine learning development life cycle using a Model Repository (either of): MLflow, Kubeflow Model Registry
- Machine Learning Services (either of): Kubeflow, DataRobot, Hopsworks, Dataiku, or any relevant ML E2E PaaS/SaaS
- Work across all phases of the model development life cycle to build MLOps components
- Build the knowledge base required to deliver increasingly complex MLOps projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 5.5-9 years of experience building production-quality software
- B.E/B.Tech/M.Tech in Computer Science or related technical degree OR equivalent
- Strong experience in System Integration, Application Development or Data Warehouse projects across technologies used in the enterprise space
- Expertise in MLOps, machine learning, and Docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- Experience developing CI/CD components for production-ready ML pipelines.
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Team handling, problem solving, project management and communication skills & creative thinking
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Responsibilities
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills
- Generative AI: Proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities.
- Familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face. Techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.
- Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
- Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
- Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
- Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis.
Who We Are 🌟
We are a company where the ‘HOW’ of building software is just as important as the ‘WHAT’. Embracing Software Craftsmanship values and eXtreme Programming practices, we create well-crafted products for our clients. We partner with large organizations to help modernize their legacy code bases, and work with startups to launch MVPs, scale, or act as extensions of their team to efficiently operationalize their ideas. We love to work with folks who are passionate about creating exceptional software, are continuous learners, and are painstakingly fussy about quality.
Our Values 💡
•Relentless Pursuit of Quality with Pragmatism
•Extreme Ownership
•Proactive Collaboration
•Active Pursuit of Mastery
•Effective Feedback
•Client Success
What We’re Looking For 👀
We’re looking to hire software craftspeople and data engineers. People who are proud of the way they work and the code they write. People who believe in and are evangelists of extreme programming principles. High-quality, motivated, and passionate people who make great teams. We heavily believe in being a DevOps organization, where developers own the entire release cycle, including infrastructure technologies in the cloud.
What You’ll Be Doing 💻
Collaborate with teams across the organization, including product managers, data engineers and business leaders to translate requirements into software solutions to process large amounts of data.
- Develop new ways to ensure ETL and data processes are running efficiently.
- Write clean, maintainable, and reusable code that adheres to best practices and coding standards.
- Conduct thorough code reviews and provide constructive feedback to ensure high-quality codebase.
- Optimize software performance and ensure scalability and reliability.
- Stay up-to-date with the latest trends and advancements in data processing and ETL development and apply them to enhance our products.
- Meet with product owners and other stakeholders weekly to discuss priorities and project requirements.
- Ensure deployment of new code is tested thoroughly and has business sign off from stakeholders as well as senior leadership.
- Handle all incoming support requests and errors promptly, within the agreed time frame and timezone commitments to the business.
Location : Remote
Skills you need in order to succeed in this role
What you will bring:
- 7+ years of experience with Java 11+ (required), managing and working in Maven projects
- 2+ years of experience with Python (required)
- Knowledge and understanding of complex data pipelines utilizing ETL processes (required)
- 4+ years of experience using relational databases and deep knowledge of SQL with the ability to understand complex data relationships and transformations (required)
- Knowledge and understanding of Git (required)
- 3+ years of experience with various GCP technologies
- Google Dataflow (Apache Beam SDK) (or equivalent Hadoop technologies)
- BigQuery (or equivalent data warehouse technologies: Snowflake, Azure DW, Redshift)
- Cloud Storage Buckets (equivalent to S3)
- GCloud CLI
- Experience with Apache Airflow / Google Composer
- Knowledge and understanding of Docker, Linux, Shell/Bash and virtualization technologies
- Knowledge and understanding of CI/CD methodologies
- Ability to understand and build UML diagrams to showcase complex logic
- Experience with various organization/code tools such as Jira, Confluence and GitHub
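As a toy illustration of the ETL pipelines mentioned above (the CSV schema and field names are hypothetical), the extract/transform/load stages map naturally onto small Python functions; Beam or Airflow pipelines organize the same stages at much larger scale:

```python
# Minimal ETL sketch: extract rows from CSV, transform (clean + derive),
# load into a "warehouse" (here just a list). Schema is hypothetical.
import csv
import io

RAW = """user_id,amount,currency
1, 10.50 ,usd
2,3.25,USD
3,,usd
"""

def extract(text):
    """Extract: parse raw CSV into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop malformed records, normalize units and casing."""
    for row in rows:
        amount = row["amount"].strip()
        if not amount:  # skip records with a missing amount
            continue
        yield {
            "user_id": int(row["user_id"]),
            "amount_cents": round(float(amount) * 100),
            "currency": row["currency"].strip().upper(),
        }

def load(records, warehouse):
    """Load: append cleaned records to the destination."""
    warehouse.extend(records)
    return warehouse

warehouse = load(transform(extract(RAW)), [])
print(warehouse)
```

In a Dataflow/Beam job, `extract`, `transform`, and `load` would become pipeline stages reading from Cloud Storage and writing to BigQuery, but the per-record logic stays this shape.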
Bonus Points for Tech Enthusiasts:
- Infrastructure as Code technologies (Pulumi, Terraform, CloudFormation)
- Experience with observability and logging platforms (DataDog)
- Experience with DBT or similar technologies