



We're looking for AI/ML enthusiasts who build, not just study. If you've implemented transformers from scratch, fine-tuned LLMs, or created innovative ML solutions, we want to see your work!
Before applying, make sure (GitHub profile required):
1. Your GitHub must include:
- At least one substantial ML/DL project with documented results
- Code demonstrating PyTorch/TensorFlow implementation skills
- Clear documentation and experiment tracking
- Bonus: Contributions to ML open-source projects
2. Pin your best projects that showcase:
- LLM fine-tuning and evaluation
- Data preprocessing pipelines
- Model training and optimization
- Practical applications of AI/ML
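As a toy illustration of the "model training and optimization" work mentioned above, a minimal gradient-descent training loop might look like the following sketch (pure NumPy, synthetic data; the variable names and learning rate are illustrative, not a prescribed solution):

```python
import numpy as np

# Synthetic regression data: 100 samples, 3 features, known true weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=100)

# Plain batch gradient descent on mean-squared error.
w = np.zeros(3)
lr = 0.1
losses = []
for _ in range(200):
    pred = X @ w
    grad = X.T @ (pred - y) / len(y)  # gradient of MSE w.r.t. w
    w -= lr * grad
    losses.append(float(np.mean((pred - y) ** 2)))
```

A real portfolio project would of course track such losses with an experiment tracker and train a proper model in PyTorch or TensorFlow; the loop above only shows the core optimize-and-log pattern.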
Technical Requirements:
- Solid understanding of deep learning fundamentals
- Python + PyTorch/TensorFlow expertise
- Experience with Hugging Face transformers
- Hands-on with large dataset processing
- NLP/Computer Vision project experience
Education:
- Completed/Pursuing Bachelor's in Computer Science or related field
- Strong foundation in ML theory and practice
Apply if:
- You have done projects using GenAI, machine learning, or deep learning.
- You have strong Python coding experience.
- You are available to start immediately at our Hyderabad office.
- You are always hungry to learn something new and aim to grow at a fast pace.
We value quality implementations and thorough documentation over quantity. Show us how you think through problems and implement solutions!

About Caw Studios


We are seeking a skilled and innovative Developer with strong expertise in Scala, Java/Python and Spark/Hadoop to join our dynamic team.
Key Responsibilities:
• Design, develop, and maintain robust, scalable backend systems using Scala, Spark, and Hadoop, with expertise in Python/Java.
• Build and deploy highly efficient, modular, and maintainable microservices architecture for enterprise-level applications.
• Write and optimize algorithms to enhance application performance and scalability.
Required Skills:
• Programming: Expert in Scala and object-oriented programming.
• Frameworks: Hands-on experience with Spark and Hadoop.
• Databases: Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB).
• Location: Mumbai
• Employment Type: Full-time

Profile: AWS Data Engineer
Mode: Hybrid
Experience: 5-7 years
Locations: Bengaluru, Pune, Chennai, Mumbai, Gurugram
Roles and Responsibilities
- Design and maintain ETL pipelines using AWS Glue and Python/PySpark
- Optimize SQL queries for Redshift and Athena
- Develop Lambda functions for serverless data processing
- Configure AWS DMS for database migration and replication
- Implement infrastructure as code with CloudFormation
- Build optimized data models for performance
- Manage RDS databases and AWS service integrations
- Troubleshoot and improve data processing efficiency
- Gather requirements from business stakeholders
- Implement data quality checks and validation
- Document data pipelines and architecture
- Monitor workflows and implement alerting
- Keep current with AWS services and best practices
Required Technical Expertise:
- Python/PySpark for data processing
- AWS Glue for ETL operations
- Redshift and Athena for data querying
- AWS Lambda and serverless architecture
- AWS DMS and RDS management
- CloudFormation for infrastructure
- SQL optimization and performance tuning
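To make the "Lambda functions for serverless data processing" duty concrete, here is a minimal Lambda-style handler sketch. The event shape, field names, and validation rule are hypothetical; a real pipeline would read from S3/Kinesis via boto3 rather than taking records inline:

```python
import json

def handler(event, context):
    """Lambda-style entry point: validate and transform incoming records.

    Assumes an event like {"records": [{"id": ..., "amount": ...}, ...]}.
    """
    valid, rejected = [], []
    for rec in event.get("records", []):
        # Basic data-quality check: amount must be a non-negative number.
        if isinstance(rec.get("amount"), (int, float)) and rec["amount"] >= 0:
            # Transform step: normalize currency amounts to integer cents.
            valid.append({**rec, "amount_cents": int(rec["amount"] * 100)})
        else:
            rejected.append(rec)
    return {
        "statusCode": 200,
        "body": json.dumps({"valid": len(valid), "rejected": len(rejected)}),
        "valid_records": valid,
    }
```

The same validate/transform/report pattern underlies the "data quality checks and validation" bullet above, whether it runs in Lambda, Glue, or PySpark.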

- Able to build APIs
- Has production knowledge
- Has deployed solutions in production



1. As a founding backend engineer, design, build and evolve the backend architecture
of our product to support customer-facing and internal features to serve 1000s of
customers and millions of data requests per hour
2. You will engage in technical decisions from choosing the optimal database design to
best-practice infrastructure decisions
3. You will proactively incorporate best practices for security, analytics, and monitoring
4. You will contribute to DevOps by building procedures for deployment,
troubleshooting, and maintenance
5. You will work directly with the CTO to set the tone for the engineering culture
This role is tailor-made for you if
1. You have 4+ years of experience designing, building, and deploying production-level
large-scale web applications
2. You have hands-on experience with Node JS and other programming languages
(Ruby, Go, Python)
3. You have worked previously with PostgreSQL, SQL, and message queues
4. You have practical knowledge of, and experience in, deploying and managing big data
applications on a cloud platform like AWS or Google Cloud
5. You are comfortable conducting code reviews and giving feedback to ensure high
standards of code maintainability and extensibility.
6. You are energized by ambiguity and can create structure in a dynamic, fast-paced
environment
7. You have high confidence, low ego, and are generally a good human being :)
You earn brownie points if
1. You love Slack (we are a Slack first company)
2. You have been part of an early/mid-stage start-up before
3. You love TechCrunch. We are obsessed with reading and talking about startups
4. You have created some amazing open-source projects


● Design, review code, and build POCs on the latest cutting-edge technologies; build and deploy highly scalable, robust, cloud-based intelligent solutions.
● Architect, design, and code high-performance, scalable solutions that meet the needs of millions of customers.
● Implement cutting-edge models and algorithms that operate on massive amounts of data.
● Work with REST frameworks.
● Work with session-based and token-based authentication, and with Celery.
● Handle payment gateway integration.
● Work with multi-threading, data structures, algorithms, and design patterns to develop robust, high-performance, scalable applications.
● Write scalable code in Python.
● Develop back-end components.
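The token-based authentication mentioned above can be sketched with Python's standard library alone. This is a simplified stand-in for a real JWT library; the `SECRET` value and claim names are illustrative, and production code would use a vetted library and a securely stored key:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # hypothetical; load from secure config in practice

def issue_token(user_id: str, ttl: int = 3600) -> str:
    """Sign a JSON payload with HMAC-SHA256 and pack it as payload.signature."""
    payload = json.dumps({"sub": user_id, "exp": int(time.time()) + ttl}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode()
            + "." + base64.urlsafe_b64encode(sig).decode())

def verify_token(token: str):
    """Return the claims dict if the signature and expiry check out, else None."""
    payload_b64, sig_b64 = token.split(".")
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    # Constant-time comparison guards against timing attacks.
    if not hmac.compare_digest(base64.urlsafe_b64decode(sig_b64), expected):
        return None  # tampered
    claims = json.loads(payload)
    if claims["exp"] < time.time():
        return None  # expired
    return claims
```

Session-based auth, by contrast, would store a server-side session keyed by an opaque cookie; the token approach above keeps the server stateless.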
Job Summary:
We want a techie at heart. Someone who is happy and curious to work on all aspects of software development.
Reporting directly to the CTO, you will be responsible for feature design, development, and continuously optimizing our tech stack.
We are looking for an experienced software engineer with at least 5 years of experience in a startup or product environment. Ideally, you have been involved in all aspects of software development, from requirements gathering to design, development, deployment, and post-release support. We are looking for all-round technical maturity. Our tech stack is Angular, Spring Boot, and Django/Python.
Key Skills
Java
SpringBoot
PostgreSQL/MySQL
Git
AWS
REST api design
Experience integrating with external APIs
Good applied understanding of Object Oriented Programming
Good database modeling and SQL knowledge.
React experience is a big plus.
Responsibilities and Duties
Build out features across the stack: backend, API design and integration, database optimization, microservices, plugins, queues, etc.
Fix bugs and write automated tests
Maintain and upgrade our Tech Stack
Translate requirements to design and write/present articulate software design.

Responsibilities:
- Writing reusable, testable, and efficient code
- Design and implementation of low-latency, high-availability, and performant applications
- Integration of user-facing elements developed by front-end developers with server-side logic
- Implementation of security and data protection
- Integration of data storage solutions, which may include databases, key-value stores, blob stores, etc.
- Experience driving innovation with methodologies such as design thinking
- Experience working on Agile Scrum and DevOps-aligned delivery
- Interest and ability to learn other coding languages as needed
- Strong communication skills and a good product sense
- Proficient communication skills, verbal and written
- Understanding of accessibility and security compliance, depending on the specific project
- Knowledge of user authentication and authorization between multiple systems, servers, and environments
- Understanding of the fundamental design principles behind a scalable application
- Familiarity with event-driven programming in Python
- Understanding of the differences between multiple delivery platforms, such as mobile vs. desktop, and optimizing output to match the specific platform
- Able to create database schemas that represent and support business processes
- Strong unit testing and debugging skills
Skills:
- Expert in Python, with knowledge of at least one Python web framework such as Django, Flask, etc., depending on your technology stack
- Familiarity with some ORM (Object Relational Mapper) libraries
- Able to integrate multiple data sources and databases into one system
- Understanding of the threading limitations of Python, and of multi-process architecture
- Good understanding of server-side templating languages such as Jinja2, Mako, etc., depending on your technology stack
- Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3
- Proficient understanding of code versioning tools such as Git, Mercurial, or SVN
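The "database schemas that represent and support business processes" skill can be illustrated with a small sketch using Python's built-in `sqlite3`. The tables, names, and sample data are hypothetical; the point is the foreign key, the check constraint, and the index supporting a common query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id),
    total       REAL NOT NULL CHECK (total >= 0)
);
-- Index on the join/filter column so per-customer lookups stay fast.
CREATE INDEX idx_orders_customer ON orders(customer_id);
""")
conn.executemany("INSERT INTO customers (id, name) VALUES (?, ?)",
                 [(1, "Asha"), (2, "Ravi")])
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(1, 120.0), (1, 80.0), (2, 45.5)])

# Aggregate query of the kind the schema is designed to support.
rows = conn.execute("""
    SELECT c.name, COUNT(o.id), SUM(o.total)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id ORDER BY c.name
""").fetchall()
```

An ORM such as the Django ORM or SQLAlchemy would express the same schema as model classes, but the underlying design decisions are identical.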



- B Tech/BE or M.Tech/ME in Computer Science or equivalent from a reputed college.
- Experience level of 7+ years in building large scale applications.
- Strong problem solving skills, data structures and algorithms.
- Experience with distributed systems handling large amount of data.
- Excellent coding skills in Java / Python / Node / Go.
- Very good understanding of Web Technologies.
- Very good understanding of any RDBMS and/or messaging systems.

Locus is a global decision-making platform in the supply chain that uses deep learning and proprietary algorithms to provide route optimization, real-time tracking, insights and analytics, beat optimization, efficient warehouse management, vehicle allocation and utilization, intuitive 3D packing, and measurement of packages. Locus automates the human decisions required to transport a package or a person between any two points on earth, delivering gains in efficiency, consistency, and transparency in operations.
Locus, which has processed a peak of 1 million orders in a day (200,000 orders an hour) and is trained and tested on over 100 million order deliveries, works in 75 cities across the globe. Locus works with several large-scale market leaders like Urban Ladder, the Tata Group of Companies, Droplet, Licious, Rollick, Lenskart, and other global FMCG, pharma, e-commerce, 3PL, and logistics conglomerates.
Locus is backed by some of the biggest names in the market and recently raised $22 Mn in Series B funding, as well as $4 Mn in a pre-Series B round. Earlier, in 2016, Locus raised $2.75 Mn (INR 18.3 Cr) in Series A funding.
Locus was started by Nishith Rastogi and Geet Garg, two ex-Amazon engineers on a mission to democratize logistics intelligence for businesses across industries. Nishith was profiled by Forbes Asia in their '30 Under 30' 2018 list. Geet, on the other hand, holds a dual degree (BTech and MTech) in Computer Science and Engineering from the Indian Institute of Technology. Our team consists of engineers from the Indian Institute of Technology and the Birla Institute of Technology & Science, Pilani, and data scientists with PhDs from Carnegie Mellon University and the Tata Institute of Fundamental Research. Our multifaceted product and business team is led by senior members from Barclays, Google, and Goldman Sachs with immense operational execution experience.
Job Description
- Design & implement backend APIs at Locus.sh
- Mentor junior developers technically.
- Actively work to reduce tech debt in the Locus backend
- Work towards more stability & scalability of the backend
- Tech stack - Java, AWS, Aurora etc.
Eligibility
- 4-8 years of product company experience
- OOP implementation experience. Programming language does not matter; we use Java internally but have hired folks from non-Java backgrounds.
- Hands-on experience with SQL; DynamoDB, Postgres, etc. preferred.
- Prior experience building REST APIs
- Advanced understanding of the AWS stack
- Prior experience solving problems at scale.
Perks:
- Healthy catered meals at office
- You decide your own Work From Home (WFH) and Out Of Office (OOO)
- Pet-friendly - bring your pets to the office any day. Locus family already has a Rottweiler and a Beagle




