
Data Scientist

Posted by Thahaseen Salahuddin
3 - 6 yrs
₹3L - ₹8L / yr
Bengaluru (Bangalore)
Skills
Pyomo
Watson
Python
Big Data
Data Science
Machine Learning (ML)
FlyNava Technologies is a start-up whose vision is to create the finest airline software for distinct competitive advantages in revenue generation and cost management. Its products are designed and built by veterans of the airline and airline-IT industry to meet the needs of this specialised customer segment. The software takes an innovative approach to age-old practices in pricing, hedging, and aircraft induction, and aims to be path-breaking to use, encouraging users to rely on its capabilities. We will sharpen our competitive edge by incorporating new technology, big-data models, operations research, and predictive analytics into our products, creating interest and creativity in their use. That interest and creativity will increase potential revenues or reduce costs considerably, creating a distinct competitive differentiation. FlyNava is convinced that when airline users achieve that differentiation easily, their alignment to the products will be self-motivated rather than mandated.

A high level of competitive advantage will also flow from the following:
- All products, solutions, and services will be copyrighted; FlyNava, as sole owner, retains high-value IP, including the underlying thesis/research.
- Existing product companies are investing in other core areas, while our business areas remain predominantly manual processes.
- Solutions are based on master's-thesis-level research that needs 2-3 years to complete, and more time to become relevant for software development; expertise in these areas is rare.

Responsible for:
- Collecting, cataloguing, and filtering data, and benchmarking solutions.
- Contributing to model-related data analytics and reporting.
- Contributing to secured software release activities.

Education & Experience:
- B.E/B.Tech or M.Tech/MCA in Computer Science / Information Science / Electronics & Communication
- 3-6 years of experience

Must Have:
- Strong in data analytics via Pyomo (for optimization), scikit-learn (for small-data ML algorithms), and MLlib (Apache Spark's big-data ML library); see the sketch below
- Strong in representing metrics and reports via JSON
- Strong in scripting with Python
- Familiar with machine learning and pattern-recognition algorithms
- Familiar with the Software Development Life Cycle
- Effective interpersonal skills

Good to have:
- Social analytics
- Big data
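
To make the first two "Must Have" items concrete, here is a minimal, illustrative sketch (not FlyNava's actual model): a toy fare-allocation optimization in Pyomo whose result is reported as JSON. The fares, seat capacities, and the GLPK solver choice are all assumptions.

import json
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, SolverFactory, maximize, value)

# Toy model: split 180 seats between two hypothetical fare classes.
model = ConcreteModel()
model.econ = Var(domain=NonNegativeReals)  # economy seats to sell
model.biz = Var(domain=NonNegativeReals)   # business seats to sell

# Maximize revenue; the fares are invented for illustration.
model.revenue = Objective(expr=100 * model.econ + 400 * model.biz, sense=maximize)
model.capacity = Constraint(expr=model.econ + model.biz <= 180)
model.biz_demand = Constraint(expr=model.biz <= 30)  # assumed demand cap

SolverFactory("glpk").solve(model)  # assumes GLPK is installed

# Report the solution as JSON, per "representing metrics and reports via JSON".
print(json.dumps({
    "economy_seats": value(model.econ),
    "business_seats": value(model.biz),
    "revenue": value(model.revenue),
}, indent=2))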

About FlyNava Technologies

Founded: 2015
Type: Product
Size: 20-100
Stage: Bootstrapped

About

Founded in 2015, FlyNava Technologies is a bootstrapped company based in Bangalore. It currently has 6-50 employees and works in the IT consultancy domain.

Connect with the team

Bhavesh Chowdary
Thahaseen Salahuddin
Shammi YK
Mahesh Shastry

Company social profiles

Blog | LinkedIn | Twitter | Facebook

Similar jobs

Non-Banking Financial Company
Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
4 - 8 yrs
₹8L - ₹13L / yr
SQL
Databricks
Power BI
Data engineering
Data architecture

ROLES AND RESPONSIBILITIES:

We are seeking a highly experienced Senior Data Engineer with strong architectural capability, excellent optimisation skills, and deep hands-on experience in modern data platforms. The ideal candidate will have advanced SQL skills, strong expertise in Databricks, and practical experience working across cloud environments such as AWS and Azure. This role requires end-to-end ownership of complex data engineering initiatives, including architecture design, data governance implementation, and performance optimisation. You will collaborate with cross-functional teams to build scalable, secure, and high-quality data solutions.


Key Responsibilities:

  • Lead the design and implementation of scalable data architectures, pipelines, and integration frameworks.
  • Develop, optimise, and maintain complex SQL queries, transformations, and Databricks-based data workflows.
  • Architect and deliver high-performance ETL/ELT processes across cloud platforms.
  • Implement and enforce data governance standards, including data quality, lineage, and access control.
  • Partner with analytics, BI (Power BI), and business teams to enable reliable, governed, and high-value data delivery.
  • Optimise large-scale data processing, ensuring efficiency, reliability, and cost-effectiveness.
  • Monitor, troubleshoot, and continuously improve data pipelines and platform performance.
  • Mentor junior engineers and contribute to engineering best practices, standards, and documentation.


IDEAL CANDIDATE:

  • Proven industry experience as a Senior Data Engineer, with ownership of high-complexity projects.
  • Advanced SQL skills with experience handling large, complex datasets.
  • Strong expertise with Databricks for data engineering workloads.
  • Hands-on experience with major cloud platforms — AWS and Azure.
  • Deep understanding of data architecture, data modelling, and optimisation techniques.
  • Familiarity with BI and reporting environments such as Power BI.
  • Strong analytical and problem-solving abilities with a focus on data quality and governance.
  • Proficiency in Python or another programming language is a plus.


PERKS, BENEFITS AND WORK CULTURE:

Our people define our passion and our audacious, incredibly rewarding achievements. The company is one of India's most diversified non-banking financial companies and among Asia's top 10 large workplaces. If you have the drive to get ahead, we can help you find an opportunity at any of our 500+ locations across India.

Agentic AI Platform
Agency job
via Peak Hire Solutions by Dhara Thakkar
Gurugram
4 - 7 yrs
₹25L - ₹50L / yr
Microservices
API
Cloud Computing
Java
Python

ROLES AND RESPONSIBILITIES:

We are looking for a Software Engineering Manager to lead a high-performing team focused on building scalable, secure, and intelligent enterprise software. The ideal candidate is a strong technologist who enjoys coding, mentoring, and driving high-quality software delivery in a fast-paced startup environment.


KEY RESPONSIBILITIES:

  • Lead and mentor a team of software engineers across backend, frontend, and integration areas.
  • Drive architectural design, technical reviews, and ensure scalability and reliability.
  • Collaborate with Product, Design, and DevOps teams to deliver high-quality releases on time.
  • Establish best practices in agile development, testing automation, and CI/CD pipelines.
  • Build reusable frameworks for low-code app development and AI-driven workflows.
  • Hire, coach, and develop engineers to strengthen technical capabilities and team culture.


IDEAL CANDIDATE:

  • B.Tech/B.E. in Computer Science from a Tier-1 Engineering College.
  • 3+ years of professional experience as a software engineer, with at least 1 year mentoring or managing engineers.
  • Strong expertise in backend development (Java / Node.js / Go / Python) and familiarity with frontend frameworks (React / Angular / Vue).
  • Solid understanding of microservices, APIs, and cloud architectures (AWS/GCP/Azure).
  • Experience with Docker, Kubernetes, and CI/CD pipelines.
  • Excellent communication and problem-solving skills.



PREFERRED QUALIFICATIONS:

  • Experience building or scaling SaaS or platform-based products.
  • Exposure to GenAI/LLM, data pipelines, or workflow automation tools.
  • Prior experience in a startup or high-growth product environment.
Remote only
0 - 3 yrs
₹0.2 - ₹1.2 / mo
PyTorch
TensorFlow
OpenCV
FFmpeg
Deep Learning

About the Role

We are looking for a passionate AI Engineer Intern (B.Tech, M.Tech / M.S. or equivalent) with strong foundations in Artificial Intelligence, Computer Vision, and Deep Learning to join our R&D team.

You will help us build and train realistic face-swap and deepfake video models, powering the next generation of AI-driven video synthesis technology.

This is a remote, individual-contributor role offering exposure to cutting-edge AI model development in a startup-like environment.


Key Responsibilities

  • Research, implement, and fine-tune face-swap / deepfake architectures (e.g., FaceSwap, SimSwap, DeepFaceLab, LatentSync, Wav2Lip).
  • Train and optimize models for realistic facial reenactment and temporal consistency.
  • Work with GANs, VAEs, and diffusion models for video synthesis.
  • Handle dataset creation, cleaning, and augmentation for face-video tasks.
  • Collaborate with the AI core team to deploy trained models in production environments.
  • Maintain clean, modular, and reproducible pipelines using Git and experiment-tracking tools.

Required Qualifications

  • B.Tech, M.Tech / M.S. (or equivalent) in AI / ML / Computer Vision / Deep Learning.
  • Certifications in AI or Deep Learning (DeepLearning.AI, NVIDIA DLI, Coursera, etc.).
  • Proficiency in PyTorch or TensorFlow, OpenCV, FFmpeg.
  • Understanding of CNNs, Autoencoders, GANs, Diffusion Models (see the autoencoder sketch after this list).
  • Familiarity with datasets like CelebA, VoxCeleb, FFHQ, DFDC, etc.
  • Good grasp of data preprocessing, model evaluation, and performance tuning.

Preferred Skills

  • Prior hands-on experience with face-swap or lip-sync frameworks.
  • Exposure to 3D morphable models, NeRF, motion transfer, or facial landmark tracking.
  • Knowledge of multi-GPU training and model optimization.
  • Familiarity with Rust / Python backend integration for inference pipelines.

What We Offer

  • Work directly on production-grade AI video synthesis systems.
  • Remote-first, flexible working hours.
  • Mentorship from senior AI researchers and engineers.
  • Opportunity to transition into a full-time role upon outstanding performance.


Location: Remote | Stipend: ₹10,000/month | Duration: 3–6 months

VyTCDC
Posted by Gobinath Sundaram
Bengaluru (Bangalore)
5 - 8 yrs
₹4L - ₹25L / yr
Data engineering
Python
Spark

🛠️ Key Responsibilities

  • Design, build, and maintain scalable data pipelines using Python and Apache Spark (PySpark or Scala APIs); a minimal sketch follows this list
  • Develop and optimize ETL processes for batch and real-time data ingestion
  • Collaborate with data scientists, analysts, and DevOps teams to support data-driven solutions
  • Ensure data quality, integrity, and governance across all stages of the data lifecycle
  • Implement data validation, monitoring, and alerting mechanisms for production pipelines
  • Work with cloud platforms (AWS, GCP, or Azure) and tools like Airflow, Kafka, and Delta Lake
  • Participate in code reviews, performance tuning, and documentation
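
A minimal sketch of the kind of PySpark batch pipeline the first bullet above describes. The S3 paths, column names, and the daily-revenue aggregate are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read a raw CSV drop (schema inference kept simple for the sketch).
orders = spark.read.option("header", True).csv("s3://bucket/raw/orders.csv")

# Transform: basic cleaning plus a daily revenue aggregate.
daily = (orders
         .withColumn("amount", F.col("amount").cast("double"))
         .filter(F.col("amount") > 0)
         .groupBy("order_date")
         .agg(F.sum("amount").alias("revenue")))

# Load: write partitioned Parquet for downstream consumers.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://bucket/curated/daily_revenue")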


🎓 Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
  • 3–6 years of experience in data engineering with a focus on Python and Spark
  • Experience with distributed computing and handling large-scale datasets (10TB+)
  • Familiarity with data security, PII handling, and compliance standards is a plus


Slintel
Agency job
via Qrata by Prajakta Kulkarni
Bengaluru (Bangalore)
4 - 9 yrs
₹20L - ₹28L / yr
Big Data
ETL
Apache Spark
Spark
Data Engineer
Responsibilities
  • Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse.
  • Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs.
  • Assemble large, complex data sets from third-party vendors to meet business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology (see the sketch after this list).
  • Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems.
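
A small, hypothetical sketch of the extract-and-load step referenced above: copying documents from MongoDB into an Elasticsearch index with the standard Python clients. The connection strings, database, and index names are invented, and the document= keyword assumes elasticsearch-py 8.x.

from pymongo import MongoClient
from elasticsearch import Elasticsearch

mongo = MongoClient("mongodb://localhost:27017")  # assumed connection string
es = Elasticsearch("http://localhost:9200")       # assumed cluster address

# Extract company records from MongoDB and load them into an
# Elasticsearch index for search and analytics.
for doc in mongo["crm"]["companies"].find():
    doc_id = str(doc.pop("_id"))  # ObjectId is not JSON-serializable
    es.index(index="companies", id=doc_id, document=doc)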

Requirements
  • 5+ years of experience in a Data Engineer role.
  • Proficiency in Linux.
  • Must have SQL knowledge and experience working with relational databases and query authoring, as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
  • Must have experience with Python/Scala.
  • Must have experience with Big Data technologies like Apache Spark.
  • Must have experience with Apache Airflow.
  • Experience with data pipeline and ETL tools like AWS Glue.
  • Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
Digi Upaay Solutions Pvt Ltd
Posted by Sridhar Chakkravarthy
Remote only
8 - 11 yrs
₹11L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
SQL
PL/SQL

Required Skill Set:

• Project experience in any of the following: Data Management, Database Development, Data Migration, or Data Warehousing.
• Expertise in SQL, PL/SQL.


Role and Responsibilities:

• Work on a complex data management program for a multi-billion-dollar customer.
• Work on customer projects related to data migration and data integration.
• No troubleshooting.
• Execute data pipelines, perform QA, and produce project documentation for project deliverables.
• Perform data profiling, data cleansing, and data analysis for migration data (see the sketch below).
• Participate and contribute in project meetings.
• Experience in data manipulation using Python preferred.
• Proficient in using Excel and PowerPoint.
• Perform other tasks as per project requirements.
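
A small, hypothetical example of the Python-based profiling and cleansing step above, using pandas; the file and column names are invented.

import pandas as pd

df = pd.read_csv("migration_extract.csv")  # hypothetical source extract

# Profile: row count, per-column null rates, duplicate count for the QA report.
profile = {
    "rows": len(df),
    "null_rate": df.isna().mean().round(3).to_dict(),
    "duplicate_rows": int(df.duplicated().sum()),
}
print(profile)

# Cleanse: trim whitespace, drop exact duplicates, normalize dates.
df["customer_name"] = df["customer_name"].str.strip()
df = df.drop_duplicates()
df["created_on"] = pd.to_datetime(df["created_on"], errors="coerce")

df.to_csv("migration_clean.csv", index=False)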

Mynd Integrated Solutions
Posted by Sushma Shishodia
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 6 yrs
₹5L - ₹12L / yr
Django
Flask
Python
AWS Lambda
Amazon Web Services (AWS)

Your responsibilities as a backend engineer will include:

  • Back-end software development
  • Software engineering: designing data models and writing effective APIs (see the sketch after this list)
  • Working together with engineers and product teams
  • Understanding business use cases and requirements for different internal teams
  • Maintenance of existing projects and new feature development
  • Consuming and integrating classifier/ML snippets from the Data Science team
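
A minimal sketch of "designing data models and writing effective APIs" using Flask, one of the listed skills. The invoice resource and its fields are invented, and a real service would back this with a database model rather than an in-memory dict.

from flask import Flask, jsonify, request

app = Flask(__name__)
invoices = {}  # toy in-memory store standing in for a real data model

@app.post("/invoices")
def create_invoice():
    payload = request.get_json()
    invoice_id = len(invoices) + 1
    invoices[invoice_id] = {"id": invoice_id, "amount": payload["amount"]}
    return jsonify(invoices[invoice_id]), 201

@app.get("/invoices/<int:invoice_id>")
def get_invoice(invoice_id):
    invoice = invoices.get(invoice_id)
    if invoice is None:
        return jsonify(error="not found"), 404
    return jsonify(invoice)

if __name__ == "__main__":
    app.run(debug=True)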

What we are looking for:

  • 4+ years of industry experience with Python and the Django framework.
  • Degree in Computer Science or a related field
  • Good analytical skills with strong fundamentals of data structures and algorithms
  • Experience building backend services, with hands-on experience through all stages of the Agile software development life cycle.
  • Ability to write optimized code, debug programs, and integrate applications with third-party tools by developing various APIs
  • Experience with databases (relational and non-relational), e.g. Cassandra, MongoDB, PostgreSQL
  • Experience writing REST APIs.
  • Prototyping initial collection and leveraging existing tools and/or creating new tools
  • Experience working with different types of datasets (e.g. unstructured, semi-structured, with missing information)
  • Ability to think critically and creatively in a dynamic environment, while picking up new tools and domain knowledge along the way
  • A positive attitude, and a growth mindset

 

Bonus:

  • Experience with relevant Python libraries such as scikit-learn, NLTK, TensorFlow, Hugging Face Transformers
  • Hands-on experience with machine learning implementations
  • Experience with cloud infrastructure (e.g. AWS) and relevant microservices
  • Good sense of humor and a team player
Velocity Services
Posted by Newali Hazarika
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹35L / yr
ETL
Informatica
Data Warehouse (DWH)
Data engineering
Oracle

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies, including NodeJS, Ruby on Rails, reactive programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of records into the data warehouse (see the Airflow sketch after this list).

  • Implement data warehouse entities with common reusable data model designs, with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
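
An assumed sketch of how the DBT layer above might be orchestrated with Airflow (both appear under "What To Bring" below): a one-task DAG that shells out to dbt run. The DAG id, schedule, and profiles path are invented, and the schedule keyword assumes Airflow 2.4+.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_transform",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",             # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    # Run the DBT project that builds the warehouse models.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --profiles-dir /opt/airflow/dbt",  # assumed path
    )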

 

What To Bring

  • 5+ years of software development experience; startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 5+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experience formulating ideas, building proofs-of-concept (POC), and converting them into production-ready projects

  • Experience building and deploying applications on on-premise infrastructure and on AWS or Google Cloud

  • A basic understanding of Kubernetes and Docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

Cloud infrastructure solutions and support company. (SE1)
Agency job
via Multi Recruit by Ranjini A R
Pune
2 - 6 yrs
₹12L - ₹16L / yr
SQL
ETL
Data engineering
Big Data
Java
  • Design, create, test, and maintain data pipeline architecture in collaboration with the Data Architect.
  • Build the infrastructure required for extraction, transformation, and loading of data from a wide variety of data sources using Java, SQL, and Big Data technologies.
  • Support the translation of data needs into technical system requirements. Support in building complex queries required by the product teams.
  • Build data pipelines that clean, transform, and aggregate data from disparate sources
  • Develop, maintain and optimize ETLs to increase data accuracy, data stability, data availability, and pipeline performance.
  • Engage with Product Management and Business to deploy and monitor products/services on cloud platforms.
  • Stay up-to-date with advances in data persistence and big data technologies and run pilots to design the data architecture to scale with the increased data sets of consumer experience.
  • Handle data integration, consolidation, and reconciliation activities for digital consumer / medical products.

Job Qualifications:

  • Bachelor's or master's degree in Computer Science, Information Management, Statistics, or a related field
  • 5+ years of experience in the Consumer or Healthcare industry in an analytical role, with a focus on building data pipelines, querying data, analyzing, and clearly presenting analyses to members of the data science team.
  • Technical expertise with data models and data mining.
  • Hands-on knowledge of programming languages such as Java, Python, R, and Scala.
  • Strong knowledge of big-data tools like Snowflake, AWS Redshift, Hadoop, MapReduce, etc.
  • Knowledge of tools like AWS Glue, S3, AWS EMR, streaming data pipelines, and Kafka/Kinesis is desirable.
  • Hands-on knowledge of SQL and NoSQL database design.
  • Knowledge of CI/CD for building and hosting solutions.
  • AWS certification is an added advantage.
  • Strong knowledge of visualization tools like Tableau and QlikView is an added advantage
  • A team player capable of working and integrating across cross-functional teams for implementing project requirements. Experience in technical requirements gathering and documentation.
  • Ability to work effectively and independently in a fast-paced agile environment with tight deadlines
  • A flexible, pragmatic, and collaborative team player with the innate ability to engage with data architects, analysts, and scientists
LatentView Analytics
Posted by talent acquisition
Chennai
3 - 5 yrs
₹0L / yr
Business Analysis
Analytics
Python
Looking for immediate joiners.

At LatentView, we would expect you to:
- Independently handle delivery of analytics assignments
- Mentor a team of 3-10 people and deliver to exceed client expectations
- Coordinate with onsite LatentView consultants to ensure high-quality, on-time delivery
- Take responsibility for technical skill-building within the organization (training, process definition, research of new tools and techniques, etc.)

You'll be a valuable addition to our team if you have:
- 3-5 years of hands-on experience in delivering analytics solutions
- Great analytical skills and a detail-oriented approach
- Strong experience in R, SAS, Python, SQL, SPSS, Statistica, MATLAB, or similar analytic tools
- Working knowledge of MS Excel, PowerPoint, and data visualization tools like Tableau
- Ability to adapt and thrive in the fast-paced environment that young companies operate in
- A background in Statistics / Econometrics / Applied Math / Operations Research / an MBA, or alternatively an engineering degree from a premier institution