
We're Hiring: Senior Python Backend Developer
Location: Baner, Pune (Work from Office)
Compensation: ₹6 LPA
Experience Required: Minimum 2 years as a Python Backend Developer
About Us
Foto Owl AI is a fast-growing product-based company headquartered in Baner, Pune.
We specialize in:
- Hyper-personalized fan engagement
- AI-powered real-time photo sharing
- Advanced media asset management
What You'll Do
As a Senior Python Backend Developer, you'll play a key role in designing, building, and deploying scalable backend systems that power our cutting-edge platforms.
Architect and develop complex, secure, and scalable backend services
Build and maintain APIs & data pipelines for web, mobile, and AI-driven platforms
Optimize SQL & NoSQL databases for high performance
Manage AWS infrastructure (EC2, S3, RDS, Lambda, CloudWatch, etc.)
Implement observability, monitoring, and security best practices
Collaborate cross-functionally with product & AI teams
Mentor junior developers and conduct code reviews
Troubleshoot and resolve production issues with efficiency
What We're Looking For
- Strong expertise in Python backend development
- Solid knowledge of Data Structures & Algorithms
- Hands-on experience with SQL (PostgreSQL/MySQL) and NoSQL (MongoDB, Redis, etc.)
- Proficiency in RESTful APIs & microservice design
- Knowledge of Docker, Kubernetes, and cloud-native systems
- Experience managing AWS-based deployments
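To give a flavour of the "RESTful APIs" requirement above, here is a minimal, illustrative sketch of a JSON health-check endpoint using only the Python standard library; the /health route and response shape are assumptions, not part of Foto Owl's actual stack:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Serves a single JSON endpoint, GET /health (illustrative only)."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep request logging quiet

def serve(port=8000):
    """Blocking helper to run the endpoint locally."""
    HTTPServer(("127.0.0.1", port), HealthHandler).serve_forever()
```

A production service would of course sit behind a framework such as Flask or FastAPI, with routing, auth, and observability layered on top.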
Why Join Us?
At Foto Owl AI, you'll be part of a passionate team building world-class media tech products used in sports, events, and fan engagement platforms. If you love scalable backend systems, real-time challenges, and AI-driven products, this is the place for you.

Similar jobs
Exp: 7-10 Years
CTC: up to 35 LPA
Skills:
- 6-10 years DevOps / SRE / Cloud Infrastructure experience
- Expert-level Kubernetes (networking, security, scaling, controllers)
- Terraform Infrastructure-as-Code mastery
- Hands-on Kafka production experience
- AWS cloud architecture and networking expertise
- Strong scripting in Python, Go, or Bash
- GitOps and CI/CD tooling experience
Key Responsibilities:
- Design highly available, secure cloud infrastructure supporting distributed microservices at scale
- Lead multi-cluster Kubernetes strategy optimized for GPU and multi-tenant workloads
- Implement Infrastructure-as-Code using Terraform across the full infrastructure lifecycle
- Optimize Kafka-based data pipelines for throughput, fault tolerance, and low latency
- Deliver zero-downtime CI/CD pipelines using GitOps-driven deployment models
- Establish SRE practices with SLOs, p95 and p99 monitoring, and FinOps discipline
- Ensure production-ready disaster recovery and business continuity testing
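The "p95 and p99 monitoring" above refers to latency percentiles tracked against SLOs. A minimal sketch, assuming the common nearest-rank convention (one of several percentile definitions in use):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: p in (0, 100], samples non-empty."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# Hypothetical request latencies in milliseconds
latencies_ms = [12, 15, 14, 120, 13, 16, 11, 250, 14, 13]
p95 = percentile(latencies_ms, 95)
p99 = percentile(latencies_ms, 99)
```

Real SRE tooling computes these with streaming estimators (HDR histograms, t-digest) rather than sorting every sample.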
If interested, kindly share your updated resume at 82008 31681.
Preferred Education & Experience:
- Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience in lieu of the above if from a different stream of education.
- Well-versed in and 5+ years of hands-on demonstrable experience with:
  ▪ Object-Oriented Modeling, Design, & Programming
  ▪ Microservices Architecture, API Design, & Implementation
  ▪ Relational, Document, & Graph Data Modeling, Design, & Implementation
- Well-versed in and hands-on demonstrable experience with:
  ▪ Stream & Batch Big Data Pipeline Processing
  ▪ Distributed Cloud-Native Computing
  ▪ Serverless Computing & Cloud Functions
- 5+ years of hands-on development experience in Java programming.
- 3+ years of hands-on development experience in one or more libraries & frameworks such as Spring Boot, Apache Camel, Akka, etc.; extra points if you can demonstrate your knowledge with working examples.
- 2+ years of hands-on development experience in one or more relational and NoSQL datastores such as Amazon S3, Amazon DocumentDB, Amazon Elasticsearch Service, Amazon Aurora, AWS DynamoDB, Amazon Athena, etc.
- 2+ years of hands-on development experience in one or more technologies such as Amazon Simple Queue Service, Amazon Kinesis, Apache Kafka, AWS Lambda, AWS Batch, AWS Glue, AWS Step Functions, Amazon API Gateway, etc.
- 2+ years of hands-on development experience in one or more technologies such as AWS Developer Tools, AWS Management & Governance, AWS Networking and Content Delivery, AWS Security, Identity, and Compliance, etc.
- Well-versed in Virtualization & Containerization; must demonstrate experience with technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
- Demonstrable working experience with API Management, API Gateway, Service Mesh, Identity & Access Management, Data Protection & Encryption.
- Hands-on experience with DevOps tools and platforms such as Jira, Git, Jenkins, code quality & security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, etc.
- Well-versed in Storage, Networks, and Storage Networking basics that will enable you to work in a Cloud environment.

Experience: 5+ years
Job Location: Remote/Pune
Azure DE
Primary Responsibilities:
- Create and maintain data storage solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure; create data models for analytics purposes.
- Create and maintain ETL (Extract, Transform, Load) operations using Azure Data Factory or comparable technologies.
- Use Azure Data Factory and Databricks to assemble large, complex data sets.
- Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data.
- Ensure data security and compliance.
- Collaborate with data engineers and other stakeholders to understand requirements and translate them into scalable and reliable data platform architectures.
Required skills:
- Blend of technical expertise, analytical problem-solving, and collaboration with cross-functional teams
- Azure DevOps
- Apache Spark, Python
- SQL proficiency
- Azure Databricks knowledge
- Big data technologies
The Data Engineers should be well versed in coding, Spark core, and data ingestion using Azure, and should also have good communication skills along with core Azure DE skills and coding skills (PySpark, Python, and SQL).
Of the 7 open positions, 5 of the Azure Data Engineers should have a minimum of 5 years of relevant Data Engineering experience.
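The "data validation and cleansing" responsibility above can be pictured with a small plain-Python sketch; a real pipeline would express the same logic in PySpark on Databricks, and the rules below (require an 'id', trim strings) are illustrative assumptions:

```python
def cleanse(records):
    """Drop rows missing an 'id', trim strings, normalise empty strings to None."""
    clean = []
    for row in records:
        if not row.get("id"):
            continue  # validation: reject rows without a primary key
        clean.append({
            k: (v.strip() or None) if isinstance(v, str) else v
            for k, v in row.items()
        })
    return clean
```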

- Challenge technical decisions and web service designs and provide inputs for improvement
- Provide qualified code (tested and documented)
- Fix issues that arise from testing/customers
- Create or update all the relevant and required technical documents (design, architecture, …)
- Follow quality development rules and recommendations (unit testing, change management, build management, software factory, …)
- Perform code reviews and suggest improvements
- Work in Agile mode and a Test-Driven Development process
- Collaborate and work closely with all stakeholders
- Hardcore developers who can write performant, quality code
- Preferred: Java 1.8, JUnit, Spring Boot, Dropwizard, RabbitMQ, Jenkins jobs, NoSQL DBs (MongoDB, Neo4j), Docker, and DevOps (Optional: Python, AWS)
- Knowledge of secure coding guidelines, SonarQube, Configuration Management (Perforce), Jira
- MUST have experience in Scrum/Agile SDLC
- Value add: experience on the server side in IPTV/OTT/STB
Responsibilities:
- Hands-on experience in Golang/Python/Ruby on Rails/Node.js
- Must have 1+ years of experience in team handling
1. Good command of Python with either Django or Flask
2. Has worked on large-scale systems
3. Experience in building REST APIs
4. Proficiency with databases such as MySQL, Oracle, and MongoDB
5. Knowledge of Kubernetes, Docker, and deployment
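Building REST APIs at large scale, as the list above asks for, usually implies pagination of list endpoints. A framework-agnostic sketch (parameter names such as page and per_page are assumptions):

```python
def paginate(items, page=1, per_page=10):
    """Return one 1-indexed page of items plus response metadata."""
    total = len(items)
    start = (page - 1) * per_page
    return {
        "items": items[start:start + per_page],
        "page": page,
        "pages": max(1, -(-total // per_page)),  # ceil division, min 1 page
        "total": total,
    }
```

A Django or Flask view would read page and per_page from query parameters and serialize this dict as JSON.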
The key aspects of this role include:
• Design, build, and maintain scalable applications using Python.
• Contribute to the entire implementation process, including driving the definition of improvements based on business needs and architectural improvements.
• Act as a subject matter expert for application software developers and engineers.
• Handle server-side code for a production platform and contribute to new features.
To be the right fit, you'll need:
• 4+ years of experience as a software developer in Python, with knowledge of at least one Python web framework such as Django, Flask, etc.
• Good understanding of common design patterns and architecture principles to design reliable and scalable applications
• Strong communication skills
• Knowledge of NoSQL databases such as MongoDB
• Good to have: AWS and Docker or web services
• Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3
Job Title: Go Developer (Remote)
Job Description: Backend Engineer (Go Developer)

Remote Working | Engineering Team | Full-time

Are you passionate enough to be a crucial part of a highly analytical and scalable user engagement platform?
Are you ready to learn new technologies and willing to step out of your comfort zone to explore and learn new skills?
If so, this is an opportunity for you to join a high-functioning team and make your mark on our organization!
The Impact you will create in the Job:
Build campaign generation services that can send app notifications at a speed of 10 million a minute.
Build dashboards to show real-time key performance indicators to clients.
Develop complex user segmentation engines that create segments on terabytes of data within a few seconds.
Leverage the power of Kubernetes to maintain clusters running inside VPCs across the world.
Build highly available & horizontally scalable platform services for ever-growing data.
Use cloud-based services like AWS Lambda for blazing-fast throughput & auto-scalability.
You will build backend services and APIs to create scalable engineering systems.
As an individual contributor, you will tackle some of our broadest technical challenges that require deep technical knowledge, hands-on software development, and seamless collaboration with all functions.
You will envision and develop features that are highly reliable and fault-tolerant to deliver a superior customer experience.
Collaborate with various cross-functional teams in the company to meet deliverables throughout the software development lifecycle.
Identify areas of improvement through data insights and research.
What do we look for?
3-6 years of experience in developing high-scale internet applications/API-based services.
Worked with Golang as a primary language.
Experience with high-scale real-time architectures.
Experience with queueing systems like RabbitMQ, Kafka, etc.
Experience with Elasticsearch would be a plus.
Hands-on experience with Kubernetes would be a plus.
Understanding of the SMTP protocol would be a plus.
Experience with MTAs will be an added advantage.
Very strong analytical and problem-solving skills.
Enjoys solving the challenges that come with developing real-time, high-scale applications.
We, the Products team at DataWeave, build data products that provide timely insights that are readily consumable and actionable, at scale. Our underpinnings are: scale, impact, engagement, and visibility. We help businesses make data-driven decisions every day. We also give them insights for long-term strategy. We are focused on creating value for our customers and helping them succeed.
How we work
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems there are. We are in the business of making sense of messy public data on the web. At serious scale! Read more at Become a DataWeaver.
What do we offer?
- Opportunity to work on some of the most compelling data products that we are building for online retailers and brands.
- Ability to see the impact of your work and the value you are adding to our customers almost immediately.
- Opportunity to work on a variety of challenging problems and technologies to figure out what really excites you.
- A culture of openness. Fun work environment. A flat hierarchy. Organization wide visibility. Flexible working hours.
- Learning opportunities with courses, trainings, and tech conferences. Mentorship from seniors in the team.
- Last but not the least, competitive salary packages and fast paced growth opportunities.
Roles and Responsibilities:
- Build a low-latency serving layer that powers DataWeave's Dashboards, Reports, and Analytics functionality
- Build robust RESTful APIs that serve data and insights to DataWeave and other products
- Design user interaction workflows on our products and integrate them with data APIs
- Help stabilize and scale our existing systems. Help design the next-generation systems.
- Scale our back-end data and analytics pipeline to handle increasingly large amounts of data.
- Work closely with the Head of Products and UX designers to understand the product vision and design philosophy
- Lead/be a part of all major tech decisions. Bring in best practices. Mentor younger team members and interns.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Be a tech thought leader. Add passion and vibrancy to the team. Push the envelope.
Skills and Requirements:
- 5-7 years of experience building and scaling APIs and web applications.
- Experience building and managing large-scale data/analytics systems.
- Have a strong grasp of CS fundamentals and excellent problem-solving abilities. Have a good understanding of software design principles and architectural best practices.
- Be passionate about writing code and have experience coding in multiple languages, including at least one scripting language, preferably Python.
- Be able to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
- Be a self-starter: someone who thrives in fast-paced environments with minimal "management".
- Have experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra, Elastic.
- Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, Datadog, etc.
- Working knowledge of building websites and apps. Good understanding of integration complexities and dependencies.
- Working knowledge of Linux server administration as well as the AWS ecosystem is desirable.
- It's a huge bonus if you have some personal projects (including open-source contributions) that you work on during your spare time. Show off some of the projects you have hosted on GitHub.
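As a taste of the "low-latency serving layer" work described above, here is a tiny in-process TTL cache sketch; production serving at this scale would typically lean on Redis or a similar store, so treat this class and its names purely as an illustration:

```python
import time

class TTLCache:
    """Minimal in-process cache with a per-entry time-to-live."""

    def __init__(self, ttl_seconds=60, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock       # injectable for deterministic tests
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if self.clock() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return default
        return value
```

Injecting the clock keeps expiry behaviour testable without real waiting.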









