
ROLES AND RESPONSIBILITIES:
We are seeking a highly experienced Senior Data Engineer with strong architectural capability, excellent optimisation skills, and deep hands-on experience in modern data platforms. The ideal candidate will have advanced SQL skills, strong expertise in Databricks, and practical experience working across cloud environments such as AWS and Azure. This role requires end-to-end ownership of complex data engineering initiatives, including architecture design, data governance implementation, and performance optimisation. You will collaborate with cross-functional teams to build scalable, secure, and high-quality data solutions.
Key Responsibilities:
- Lead the design and implementation of scalable data architectures, pipelines, and integration frameworks.
- Develop, optimise, and maintain complex SQL queries, transformations, and Databricks-based data workflows (an illustrative sketch follows this list).
- Architect and deliver high-performance ETL/ELT processes across cloud platforms.
- Implement and enforce data governance standards, including data quality, lineage, and access control.
- Partner with analytics, BI (Power BI), and business teams to enable reliable, governed, and high-value data delivery.
- Optimise large-scale data processing, ensuring efficiency, reliability, and cost-effectiveness.
- Monitor, troubleshoot, and continuously improve data pipelines and platform performance.
- Mentor junior engineers and contribute to engineering best practices, standards, and documentation.
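For illustration, the sketch below shows the kind of Databricks-based transformation this role optimises: a minimal PySpark job that prunes partitions on read, aggregates, and writes a partitioned table. The table names, columns, and load window are hypothetical placeholders, not part of any actual platform described here.

# Minimal PySpark sketch of an incrementally loaded, partition-pruned aggregation.
# Table and column names (raw.orders, curated.daily_order_totals, order_date, ...)
# are hypothetical placeholders used only for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-order-totals").getOrCreate()

# Read only the partitions inside the load window (partition pruning keeps the scan small).
orders = (
    spark.read.table("raw.orders")
    .where(F.col("order_date") >= F.date_sub(F.current_date(), 7))
)

# De-duplicate and aggregate before writing, keeping the shuffle as small as possible.
daily_totals = (
    orders.dropDuplicates(["order_id"])
    .groupBy("order_date", "region")
    .agg(
        F.count("order_id").alias("order_count"),
        F.sum("order_amount").alias("total_amount"),
    )
)

# Write the result as a table partitioned by date for downstream BI queries.
daily_totals.write.mode("overwrite").partitionBy("order_date").saveAsTable(
    "curated.daily_order_totals"
)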
IDEAL CANDIDATE:
- Proven industry experience as a Senior Data Engineer, with ownership of high-complexity projects.
- Advanced SQL skills with experience handling large, complex datasets.
- Strong expertise with Databricks for data engineering workloads.
- Hands-on experience with major cloud platforms — AWS and Azure.
- Deep understanding of data architecture, data modelling, and optimisation techniques.
- Familiarity with BI and reporting environments such as Power BI.
- Strong analytical and problem-solving abilities with a focus on data quality and governance.
- Proficiency in Python or another programming language is a plus.
PERKS, BENEFITS AND WORK CULTURE:
Our people define our passion and our audacious, incredibly rewarding achievements. The company is one of India’s most diversified non-banking financial companies, and among Asia’s top 10 large workplaces. If you have the drive to get ahead, we can help you find an opportunity at any of our 500+ locations across India.
ROLES AND RESPONSIBILITIES:
We are looking for a Software Engineering Manager to lead a high-performing team focused on building scalable, secure, and intelligent enterprise software. The ideal candidate is a strong technologist who enjoys coding, mentoring, and driving high-quality software delivery in a fast-paced startup environment.
KEY RESPONSIBILITIES:
- Lead and mentor a team of software engineers across backend, frontend, and integration areas.
- Drive architectural design, technical reviews, and ensure scalability and reliability.
- Collaborate with Product, Design, and DevOps teams to deliver high-quality releases on time.
- Establish best practices in agile development, testing automation, and CI/CD pipelines.
- Build reusable frameworks for low-code app development and AI-driven workflows.
- Hire, coach, and develop engineers to strengthen technical capabilities and team culture.
IDEAL CANDIDATE:
- B.Tech/B.E. in Computer Science from a Tier-1 Engineering College.
- 3+ years of professional experience as a software engineer, with at least 1 year mentoring or managing engineers.
- Strong expertise in backend development (Java / Node.js / Go / Python) and familiarity with frontend frameworks (React / Angular / Vue).
- Solid understanding of microservices, APIs, and cloud architectures (AWS/GCP/Azure).
- Experience with Docker, Kubernetes, and CI/CD pipelines.
- Excellent communication and problem-solving skills.
PREFERRED QUALIFICATIONS:
- Experience building or scaling SaaS or platform-based products.
- Exposure to GenAI/LLM, data pipelines, or workflow automation tools.
- Prior experience in a startup or high-growth product environment.
About the Role
We are looking for a passionate AI Engineer Intern (B.Tech, M.Tech / M.S. or equivalent) with strong foundations in Artificial Intelligence, Computer Vision, and Deep Learning to join our R&D team.
You will help us build and train realistic face-swap and deepfake video models, powering the next generation of AI-driven video synthesis technology.
This is a remote, individual-contributor role offering exposure to cutting-edge AI model development in a startup-like environment.
Key Responsibilities
- Research, implement, and fine-tune face-swap / deepfake architectures (e.g., FaceSwap, SimSwap, DeepFaceLab, LatentSync, Wav2Lip); a toy sketch follows this list.
- Train and optimize models for realistic facial reenactment and temporal consistency.
- Work with GANs, VAEs, and diffusion models for video synthesis.
- Handle dataset creation, cleaning, and augmentation for face-video tasks.
- Collaborate with the AI core team to deploy trained models in production environments.
- Maintain clean, modular, and reproducible pipelines using Git and experiment-tracking tools.
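As a rough illustration of the face-swap training work described above, the toy sketch below shows the shared-encoder / per-identity-decoder pattern used by classic face-swap pipelines. The layer sizes, 64x64 RGB input, and random placeholder batch are arbitrary choices for illustration, not a production architecture.

# Toy sketch of the classic face-swap pattern: one shared encoder, one decoder per
# identity. Layer sizes and the 64x64 RGB input are illustrative placeholders only.
import torch
from torch import nn

class Encoder(nn.Module):
    def __init__(self, latent_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1),    # 64x64 -> 32x32
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(inplace=True),
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim: int = 512):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),    # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(self.fc(z).view(-1, 128, 16, 16))

# One shared encoder, separate decoders for the source and target identities.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()
recon_loss = nn.L1Loss()

faces_a = torch.rand(8, 3, 64, 64)  # placeholder batch of identity-A face crops
loss_a = recon_loss(decoder_a(encoder(faces_a)), faces_a)
loss_a.backward()  # a real pipeline would step optimisers for encoder + decoder_a here

At swap time, frames of identity A are passed through the shared encoder and decoded with decoder_b; real pipelines add adversarial, perceptual, and temporal-consistency losses on top of this skeleton.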
Required Qualifications
- B.Tech, M.Tech / M.S. (or equivalent) in AI / ML / Computer Vision / Deep Learning.
- Certifications in AI or Deep Learning (DeepLearning.AI, NVIDIA DLI, Coursera, etc.).
- Proficiency in PyTorch or TensorFlow, OpenCV, FFmpeg.
- Understanding of CNNs, Autoencoders, GANs, Diffusion Models.
- Familiarity with datasets like CelebA, VoxCeleb, FFHQ, DFDC, etc.
- Good grasp of data preprocessing, model evaluation, and performance tuning.
Preferred Skills
- Prior hands-on experience with face-swap or lip-sync frameworks.
- Exposure to 3D morphable models, NeRF, motion transfer, or facial landmark tracking.
- Knowledge of multi-GPU training and model optimization.
- Familiarity with Rust / Python backend integration for inference pipelines.
What We Offer
- Work directly on production-grade AI video synthesis systems.
- Remote-first, flexible working hours.
- Mentorship from senior AI researchers and engineers.
- Opportunity to transition into a full-time role upon outstanding performance.
Location: Remote | Stipend: ₹10,000/month | Duration: 3–6 months
🛠️ Key Responsibilities
- Design, build, and maintain scalable data pipelines using Python and Apache Spark (PySpark or Scala APIs)
- Develop and optimize ETL processes for batch and real-time data ingestion
- Collaborate with data scientists, analysts, and DevOps teams to support data-driven solutions
- Ensure data quality, integrity, and governance across all stages of the data lifecycle
- Implement data validation, monitoring, and alerting mechanisms for production pipelines
- Work with cloud platforms (AWS, GCP, or Azure) and tools like Airflow, Kafka, and Delta Lake (an illustrative DAG sketch follows this list)
- Participate in code reviews, performance tuning, and documentation
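As a hedged illustration of the orchestration, validation, and alerting responsibilities above, here is a minimal Airflow DAG that chains two Spark jobs behind a simple validation gate. The DAG id, schedule, script paths, and row-count check are hypothetical placeholders, not a prescribed setup.

# Minimal Airflow 2.x DAG sketch: daily batch ETL with a validation gate.
# DAG id, schedule, and the referenced spark-submit scripts are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

def validate_row_counts(**context):
    # Placeholder check: a real pipeline would compare source vs. target counts
    # and raise so the task fails (and alerting fires) on a mismatch.
    expected, loaded = 1000, 1000
    if loaded < expected:
        raise ValueError(f"Row count mismatch: expected {expected}, got {loaded}")

with DAG(
    dag_id="daily_events_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract",
        bash_command="spark-submit /opt/jobs/extract_events.py",  # hypothetical job
    )
    transform = BashOperator(
        task_id="transform",
        bash_command="spark-submit /opt/jobs/transform_events.py",  # hypothetical job
    )
    validate = PythonOperator(
        task_id="validate",
        python_callable=validate_row_counts,
    )

    extract >> transform >> validate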
🎓 Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
- 3–6 years of experience in data engineering with a focus on Python and Spark
- Experience with distributed computing and handling large-scale datasets (10TB+)
- Familiarity with data security, PII handling, and compliance standards is a plus
- Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse.
- Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs.
- Assemble large, complex data sets from third-party vendors to meet business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology (a small staging sketch follows this list).
- Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems.
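As one hedged example of assembling third-party data with the AWS and MongoDB stack mentioned above, the sketch below stages a vendor collection from MongoDB into S3 as newline-delimited JSON using pymongo and boto3. The connection string, database, collection, bucket, and key are hypothetical placeholders.

# Rough sketch: pull a vendor collection from MongoDB and stage it in S3 as
# newline-delimited JSON. URIs, database/collection names, and the bucket/key
# are hypothetical placeholders.
import json

import boto3
from pymongo import MongoClient

def stage_vendor_orders_to_s3() -> None:
    client = MongoClient("mongodb://localhost:27017")  # placeholder URI
    collection = client["vendor_feeds"]["orders"]      # placeholder db/collection

    # Serialise documents as one JSON object per line, dropping Mongo's ObjectId.
    lines = [json.dumps(doc, default=str) for doc in collection.find({}, {"_id": 0})]

    boto3.client("s3").put_object(
        Bucket="example-data-lake",                    # placeholder bucket
        Key="staging/vendor_orders/orders.jsonl",
        Body="\n".join(lines).encode("utf-8"),
    )

if __name__ == "__main__":
    stage_vendor_orders_to_s3()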
Requirements
- 5+ years of experience in a Data Engineer role.
- Proficiency in Linux.
- Must have SQL knowledge and experience working with relational databases and query authoring, as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
- Must have experience with Python/Scala.
- Must have experience with Big Data technologies like Apache Spark.
- Must have experience with Apache Airflow.
- Experience with data pipeline and ETL tools like AWS Glue.
- Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
Required Skill Set:
• Project experience in any of the following: Data Management, Database Development, Data Migration, or Data Warehousing.
• Expertise in SQL, PL/SQL.
Role and Responsibilities:
• Work on a complex data management program for a multi-billion-dollar customer.
• Work on customer projects related to data migration and data integration.
• No troubleshooting.
• Execute data pipelines, perform QA, and prepare project documentation for project deliverables.
• Perform data profiling, data cleansing, and data analysis for migration data.
• Participate and contribute in project meetings.
• Experience in data manipulation using Python preferred.
• Proficient in using Excel and PowerPoint.
• Perform other tasks as per project requirements.
Your responsibilities as a backend engineer will include:
- Back-end software development
- Software engineering: designing data models and writing effective APIs (an illustrative sketch follows this list)
- Working together with engineers and product teams
- Understanding business use cases and requirements for different internal teams
- Maintenance of existing projects and new feature development
- Consume and integrate classifier/ML snippets from the data science team
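To illustrate the data-model-plus-API work described above, here is a minimal sketch using Django REST Framework inside an existing Django app. The Prediction model and its fields are hypothetical stand-ins for classifier outputs consumed from a data science team, not an actual schema.

# Minimal Django REST Framework sketch: a model, its serializer, and a CRUD viewset.
# The Prediction model and its fields are hypothetical placeholders.
from django.db import models
from rest_framework import serializers, viewsets

class Prediction(models.Model):
    label = models.CharField(max_length=64)
    score = models.FloatField()
    created_at = models.DateTimeField(auto_now_add=True)

class PredictionSerializer(serializers.ModelSerializer):
    class Meta:
        model = Prediction
        fields = ["id", "label", "score", "created_at"]

class PredictionViewSet(viewsets.ModelViewSet):
    """Read/write REST endpoints for stored classifier predictions."""
    queryset = Prediction.objects.all()
    serializer_class = PredictionSerializer

# Hypothetical urls.py wiring:
# from rest_framework.routers import DefaultRouter
# router = DefaultRouter()
# router.register(r"predictions", PredictionViewSet)
# urlpatterns = router.urls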
What we are looking for:
- 4+ years of industry experience with Python and the Django framework.
- Degree in Computer Science or related field
- Good analytical skills with strong fundamentals of data structures and algorithms
- Experience building backend services with hands-on experience through all stages of Agile software development life cycle.
- Ability to write optimized code, debug programs, and integrate applications with third-party tools by developing various APIs
- Experience with databases (relational and non-relational), e.g. Cassandra, MongoDB, PostgreSQL
- Experience with writing REST APIs.
- Prototyping initial data collection and leveraging existing tools and/or creating new tools
- Experience working with different types of datasets (e.g. unstructured, semi-structured, with missing information)
- Ability to think critically and creatively in a dynamic environment, while picking up new tools and domain knowledge along the way
- A positive attitude, and a growth mindset
Bonus:
- Experience with relevant Python libraries such as scikit-learn, NLTK, TensorFlow, and Hugging Face Transformers
- Hands-on experience with machine learning implementations
- Experience with cloud infrastructure (e.g. AWS) and relevant microservices
- A good sense of humor and a team player
We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.
We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.
Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!
The technology stack at Velocity comprises a wide variety of cutting-edge technologies such as NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.
Key Responsibilities
- Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling, and overseeing overall data quality.
- Work with the Office of the CTO as an active member of our architecture guild.
- Write pipelines to consume data from multiple sources.
- Write a data transformation layer using DBT to transform millions of records into the data warehouse (a brief sketch follows this list).
- Implement data warehouse entities with common, re-usable data model designs with automation and data quality capabilities.
- Identify downstream implications of data loads/migrations (e.g., data quality, regulatory).
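As a brief, hedged sketch of how the DBT transformation layer above might be orchestrated from Python, the snippet below runs a selector's models and then their tests so that data-quality failures stop the pipeline. The project directory and selector name are hypothetical placeholders.

# Sketch: invoke the dbt CLI from Python, running models and then tests for a
# selector. The project directory and selector are hypothetical placeholders.
import subprocess

def run_dbt(command: str, select: str, project_dir: str = "/opt/analytics/dbt") -> None:
    """Run a dbt command and raise if it exits non-zero."""
    subprocess.run(
        ["dbt", command, "--select", select, "--project-dir", project_dir],
        check=True,
    )

if __name__ == "__main__":
    run_dbt("run", select="staging")   # build the staging models
    run_dbt("test", select="staging")  # enforce data quality with dbt tests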
What To Bring
- 5+ years of software development experience; startup experience is a plus.
- Past experience working with Airflow and DBT is preferred.
- 5+ years of experience working in any backend programming language.
- Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server, or MySQL.
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
- Experienced in the formulation of ideas: building proof-of-concepts (POCs) and converting them into production-ready projects.
- Experience building and deploying applications on both on-premise and cloud-based infrastructure (AWS or Google Cloud).
- Basic understanding of Kubernetes and Docker is a must.
- Experience in data processing (ETL, ELT) and/or cloud-based platforms.
- Working proficiency and communication skills in verbal and written English.
- Design, create, test, and maintain data pipeline architecture in collaboration with the Data Architect.
- Build the infrastructure required for extraction, transformation, and loading of data from a wide variety of data sources using Java, SQL, and Big Data technologies.
- Support the translation of data needs into technical system requirements. Support in building complex queries required by the product teams.
- Build data pipelines that clean, transform, and aggregate data from disparate sources
- Develop, maintain and optimize ETLs to increase data accuracy, data stability, data availability, and pipeline performance.
- Engage with Product Management and Business to deploy and monitor products/services on cloud platforms.
- Stay up-to-date with advances in data persistence and big data technologies and run pilots to design the data architecture to scale with the increased data sets of consumer experience.
- Handle data integration, consolidation, and reconciliation activities for digital consumer / medical products.
Job Qualifications:
- Bachelor’s or master's degree in Computer Science, Information management, Statistics or related field
- 5+ years of experience in the Consumer or Healthcare industry in an analytical role, with a focus on building data pipelines, querying data, analyzing, and clearly presenting analyses to members of the data science team.
- Technical expertise with data models and data mining.
- Hands-on knowledge of programming languages such as Java, Python, R, and Scala.
- Strong knowledge of big data tools like Snowflake, AWS Redshift, Hadoop, MapReduce, etc.
- Knowledge of tools like AWS Glue, S3, AWS EMR, streaming data pipelines, and Kafka/Kinesis is desirable (a small consumer sketch follows this list).
- Hands-on knowledge of SQL and NoSQL database design.
- Knowledge of CI/CD for building and hosting solutions.
- AWS certification is an added advantage.
- Strong knowledge of visualization tools like Tableau and QlikView is an added advantage.
- A team player capable of working and integrating across cross-functional teams for implementing project requirements. Experience in technical requirements gathering and documentation.
- Ability to work effectively and independently in a fast-paced agile environment with tight deadlines
- A flexible, pragmatic, and collaborative team player with the innate ability to engage with data architects, analysts, and scientists
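As referenced in the qualifications above, here is a small hedged sketch of a streaming ingestion step using kafka-python: consume JSON events and buffer them for a downstream micro-batch load. The topic, broker address, and batch size are hypothetical placeholders.

# Sketch: consume JSON events from Kafka and flush them in micro-batches.
# Topic name, broker address, and batch size are hypothetical placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "consumer-events",                   # placeholder topic
    bootstrap_servers="localhost:9092",  # placeholder broker
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 500:
        # In a real pipeline the batch would be written to S3, Redshift, or Snowflake.
        print(f"Flushing {len(batch)} events")
        batch.clear()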










