Fullstack Developer

at INSTAFUND INTERNET PRIVATE LIMITED

Posted by Pruthiraj Rath
Chennai
1 - 3 yrs
₹3L - ₹6L / yr
Full time
Skills
React.js
Javascript
Python
LAMP Stack
MongoDB
NodeJS (Node.js)
Ruby on Rails (ROR)
At Daddyswallet, we’re using today’s technology to bring significant disruptive innovation to the financial industry. We focus on improving the lives of consumers by delivering simple, honest and transparent financial products. We are looking for a Fullstack Developer with skills mainly in React Native, React.js, Python, and Node.js.


Similar jobs

Natural Language Processing (NLP)
PyTorch
Python
Java
Solr
Elastic Search
Bengaluru (Bangalore)
4 - 6 yrs
₹4L - ₹20L / yr
Skill Set:
  • 4+ years of experience. Solid understanding of Python, Java and general software development skills (source code management, debugging, testing, deployment, etc.).
  • Experience working with Solr and ElasticSearch. Experience with NLP technologies and the handling of unstructured text. Detailed understanding of text pre-processing and normalisation techniques such as tokenisation, lemmatisation, stemming, POS tagging, etc.
  • Prior experience implementing traditional ML solutions for classification, regression or clustering problems. Expertise in text analytics - Sentiment Analysis, Entity Extraction, Language modelling - and associated sequence-learning models (RNN, LSTM, GRU).
  • Comfortable working with deep-learning libraries (e.g. PyTorch).
  • The candidate can even be a fresher with 1 or 2 years of experience; IIIT, BITS Pilani and the top 5 local colleges are the preferred colleges and universities.
  • A Master's candidate in machine learning.
  • Candidates can be sourced from Mu Sigma and Manthan.
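The pre-processing and normalisation techniques the skill set names (tokenisation, lemmatisation, stemming) can be sketched in a few lines of plain Python. The suffix-stripping stemmer below is a toy stand-in for a real stemmer or lemmatiser such as NLTK's or spaCy's, and the example strings are illustrative:

```python
import re

def tokenise(text):
    # Lowercase and split on runs of non-alphanumeric characters.
    return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

def stem(token):
    # Toy suffix-stripping stemmer; a production pipeline would use
    # a real stemmer/lemmatiser (e.g. NLTK Porter or spaCy) instead.
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text):
    return [stem(t) for t in tokenise(text)]

print(preprocess("The models were tokenising unstructured texts"))
```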
Job posted by
HyreSpree Team

Data Lead

at Blue Sky Analytics

Founded 2018  •  Product  •  20-100 employees  •  Raised funding
Python
Remote sensing
Data Science
GIS analysis
Remote only
5 - 10 yrs
₹8L - ₹25L / yr

About the Company

Blue Sky Analytics is a Climate Tech startup that combines the power of AI & Satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental data-sets spanning water, land, heat, and more!


We are looking for a Data Lead - someone who works at the intersection of data science, GIS, and engineering. We want a leader who not only understands environmental data but can also quickly assemble the large-scale datasets that are crucial to the well-being of our planet. Come save the planet with us!


Your Role

Manage: As a leadership position, this role requires long-term strategic thinking. You will be in charge of the daily operations of the data team. This includes running team standups, planning the execution of data generation, and ensuring the algorithms are put into production. You will also be the person in charge of dumbing down the data science for the rest of us who do not know what it means.

Love and Live Data: You will also be taking all the responsibility of ensuring that the data we generate is accurate, clean, and is ready to use for our clients. This would entail that you understand what the market needs, calculate feasibilities and build data pipelines. You should understand the algorithms that we use or need to use and take decisions on what would serve the needs of our clients well. We also want our Data Lead to be constantly probing for newer and optimized ways of generating datasets. It would help if they were abreast of all the latest developments in the data science and environmental worlds. The Data Lead also has to be able to work with our Platform team on integrating the data on our platform and API portal.

Collaboration: We use Clubhouse to track and manage our projects across our organization - this will require you to collaborate with the team and follow up with members on a regular basis; about 50% of this work involves keeping a pulse on the platform team. You'll collaborate closely with peers from other functions—Design, Product, Marketing, Sales, and Support to name a few—on our overall product roadmap, on product launches, and on ongoing operations. You will find yourself working with the product management team to define and execute the feature roadmap. You will be expected to work closely with the CTO, reporting on daily operations and development. We don't believe in a top-down hierarchical approach and are transparent with everyone. This means honest, mutual feedback and the ability to adapt.

Teaching: Not exactly in the traditional sense. You'll recruit, coach, and develop engineers while ensuring that they are regularly receiving feedback and making rapid progress on personal and professional goals.

Humble and cool: Look we will be upfront with you about one thing - our team is fairly young and is always buzzing with work. In this fast-paced setting, we are looking for someone who can stay cool, is humble, and is willing to learn. You are adaptable, can skill up fast, and are fearless at trying new methods. After all, you're in the business of saving the planet!

Requirements

  • A minimum of 5 years of industry experience.
  • Hyper-curious!
  • Exceptional at Remote Sensing Data, GIS, Data Science.
  • Must have big data & data analytics experience.
  • Very good at documenting and speccing datasets.
  • Experience with AWS Cloud, Linux, Infra as Code & Docker (containers) is a must.
  • Coordinate with cross-functional teams (DevOps, QA, Design, etc.) on planning and execution.
  • Lead, mentor and manage the deliverables of a talented and highly motivated team of developers.
  • Must have experience in building, managing, growing & hiring data teams, and have built large-scale datasets from scratch.
  • Manage work on the team's Clubhouse and follow up with the team; ~50% of the work involves keeping a pulse on the platform team.
  • Exceptional communication skills & the ability to abstract away problems & build systems; should be able to explain anything & everything to the management.
  • Quality control: you'll be responsible for maintaining a high quality bar for everything your team ships, including documentation and data quality.
  • Experience of having led smaller teams would be a plus.

Benefits

  • Work from anywhere: Work by the beach or from the mountains.
  • Open source at heart: We are building a community that you can use, contribute to, and collaborate on.
  • Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
  • Flexible timings: Fit your work around your lifestyle.
  • Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
  • Work Machine of choice: Buy a device and own it after completing a year at BSA.
  • Quarterly Retreats: Yes, there's work, but then there's all the non-work fun aspect, aka the retreat!
  • Yearly vacations: Take time off to rest and get ready for the next big assignment by availing the paid leaves.
Job posted by
Balahun Khonglanoh

Data Analyst

at Srijan Technologies

Founded 2002  •  Products & Services  •  100-1000 employees  •  Profitable
Data Analytics
Data modeling
Python
PySpark
ETL
SQL
Axure
Amazon Web Services (AWS)
Remote only
3 - 8 yrs
₹5L - ₹12L / yr

Role Description:

  • You will be part of the data delivery team and will have the opportunity to develop a deep understanding of the domain/function.
  • You will design and drive the work plan for the optimization/automation and standardization of the processes incorporating best practices to achieve efficiency gains.
  • You will run data engineering pipelines, link raw client data with data model, conduct data assessment, perform data quality checks, and transform data using ETL tools.
  • You will perform data transformations, modeling, and validation activities, as well as configure applications to the client context. You will also develop scripts to validate, transform, and load raw data using programming languages such as Python and / or PySpark.
  • In this role, you will determine database structural requirements by analyzing client operations, applications, and programming.
  • You will develop cross-site relationships to enhance idea generation, and manage stakeholders.
  • Lastly, you will collaborate with the team to support ongoing business processes by delivering high-quality end products on time, and perform quality checks wherever required.
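As a minimal sketch of the load, quality-check, and transform loop described above, the snippet below uses Python's built-in sqlite3 in place of a real warehouse and ETL tool; the table and column names are invented for illustration:

```python
import sqlite3

# Stage raw client data (illustrative schema and rows).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (order_id INT, amount REAL, region TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 120.0, "south"), (2, None, "north"), (3, 75.5, "south"), (4, 50.0, "north")],
)

# Data quality check: flag rows with missing amounts before transforming.
bad = conn.execute("SELECT COUNT(*) FROM raw_orders WHERE amount IS NULL").fetchone()[0]
print(f"{bad} row(s) failed the null check")

# Transform: aggregate the clean rows into the shape the data model expects.
rows = conn.execute(
    """SELECT region, SUM(amount) AS total
       FROM raw_orders
       WHERE amount IS NOT NULL
       GROUP BY region
       ORDER BY region"""
).fetchall()
print(rows)
```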

Job Requirement:

  • Bachelor’s degree in Engineering or Computer Science; Master’s degree is a plus
  • 3+ years of professional work experience with a reputed analytics firm
  • Expertise in handling large amount of data through Python or PySpark
  • Conduct data assessment, perform data quality checks and transform data using SQL and ETL tools
  • Experience of deploying ETL / data pipelines and workflows in cloud technologies and architecture such as Azure and Amazon Web Services will be valued
  • Comfort with data modelling principles (e.g. database structure, entity relationships, UID etc.) and software development principles (e.g. modularization, testing, refactoring, etc.)
  • A thoughtful and comfortable communicator (verbal and written) with the ability to facilitate discussions and conduct training
  • Strong problem-solving, requirement-gathering, and leadership skills.
  • Track record of completing projects successfully on time, within budget and as per scope

Job posted by
PriyaSaini

Machine Learning Engineer

at Aikon Labs Private Limited

Founded 2009  •  Product  •  20-100 employees  •  Bootstrapped
Natural Language Processing (NLP)
Machine Learning (ML)
Data Structures
Algorithms
Deep Learning
Python
Scikit-Learn
Flask
RESTful APIs
Pune
0 - 5 yrs
₹1L - ₹8L / yr
About us
Aikon Labs Pvt Ltd is a start-up focused on Realizing Ideas. One such idea is iEngage.io, our Intelligent Engagement Platform. We leverage Augmented Intelligence, a combination of machine-driven insights & human understanding, to serve a timely response to every interaction from the people you care about.
Get in touch if you are interested.

Do you have a passion to be a part of an innovative startup? Here’s an opportunity for you - become an active member of our core platform development team.

Main Duties
● Quickly research the latest innovations in Machine Learning, especially with respect to Natural Language Understanding, and implement them if useful
● Train models to provide different insights, mainly from text but also from other media such as audio and video
● Validate the trained models; fine-tune and optimise as necessary
● Deploy validated models wrapped in a Flask server as a REST API, or containerize them in Docker containers
● Build preprocessing pipelines for the models that are being served as a REST API
● Periodically test and validate the models in use; update where necessary
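The deployment duty above (a validated model wrapped behind a REST API) can be sketched end to end with the standard library. http.server stands in for Flask here to keep the example dependency-free, and predict() is a placeholder for a real trained model:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def predict(text):
    # Placeholder for a real validated model (e.g. a loaded classifier).
    return {"label": "positive" if "good" in text.lower() else "negative"}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Parse the JSON request body and return the model's prediction.
        payload = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        body = json.dumps(predict(payload["text"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

# Serve on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = Request(
    f"http://127.0.0.1:{server.server_address[1]}",
    data=json.dumps({"text": "A good result"}).encode(),
    headers={"Content-Type": "application/json"},
)
response = json.loads(urlopen(req).read())
print(response)
```

A Flask version would be nearly identical: the handler body becomes a `@app.route("/predict", methods=["POST"])` view, and the containerization step packages it with a Dockerfile.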

Role & Relationships
We consider ourselves a team & you will be a valuable part of it. You could be reporting to a Senior member or directly to our Founder, CEO

Educational Qualifications
We don’t discriminate, as long as you have the required skill set & the right attitude.

Experience
Up to two years of experience, preferably working on ML. Freshers are welcome too!

Skills
Good
● Strong understanding of Java / Python
● Clarity on concepts of Data Science
● A strong grounding in core Machine Learning
● Ability to wrangle & manipulate data into a processable form
● Knowledge of web technologies like Web server (Flask, Django etc), REST API's
Even better
● Experience with deep learning
● Experience with frameworks like Scikit-Learn, Tensorflow, Pytorch, Keras
Competencies
● Knowledge of NLP libraries such as NLTK, spacy, gensim.
● Knowledge of NLP models such as Word2vec, GloVe, ELMo, fastText
● An aptitude to solve problems & learn something new
● Highly self-motivated
● Analytical frame of mind
● Ability to work in fast-paced, dynamic environment

Location
Pune

Remuneration
Once we meet, we shall make an offer depending on how good a fit you are & the experience you already have
Job posted by
Shankar K

(Sr) Data Engineer, Delivery

at Dataweave Pvt Ltd

Founded 2011  •  Products & Services  •  100-1000 employees  •  Raised funding
Python
Data Structures
Algorithms
Web Scraping
Bengaluru (Bangalore)
3 - 7 yrs
Best in industry
Relevant set of skills
● Good communication and collaboration skills with 4-7 years of experience.
● Ability to code and script, with a strong grasp of CS fundamentals and excellent problem-solving abilities.
● Comfort with frequent, incremental code testing and deployment; data management skills.
● Good understanding of RDBMS.
● Experience in building data pipelines and processing large datasets.
● Knowledge of web scraping and data mining is a plus.
● Working knowledge of open-source tools such as MySQL, Solr, ElasticSearch and Cassandra (data stores) would be a plus.
● Expert in Python programming.
Role and responsibilities
● Inclined towards working in a start-up environment.
● Comfort with frequent, incremental code testing and deployment; data management skills.
● Design and build robust, scalable data engineering solutions for structured and unstructured data to deliver business insights, reporting and analytics.
● Expertise in troubleshooting, debugging, data completeness and quality issues, and scaling overall system performance.
● Build robust APIs that power our delivery points (dashboards, visualizations and other integrations).
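Since web scraping is named as a plus, here is a dependency-free sketch of the extraction side using the standard library's html.parser; the markup and the "product-title" selector are invented for illustration (a real crawler would add fetching, retries, and politeness):

```python
from html.parser import HTMLParser

class TitleScraper(HTMLParser):
    """Collect the text of elements whose class is 'product-title'
    (an illustrative selector, not a real site's markup)."""

    def __init__(self):
        super().__init__()
        self._capturing = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("class") == "product-title":
            self._capturing = True

    def handle_endtag(self, tag):
        self._capturing = False

    def handle_data(self, data):
        if self._capturing and data.strip():
            self.titles.append(data.strip())

page = """
<ul>
  <li><span class="product-title">Blue Kettle</span> <span class="price">₹999</span></li>
  <li><span class="product-title">Red Toaster</span> <span class="price">₹1,499</span></li>
</ul>
"""
scraper = TitleScraper()
scraper.feed(page)
print(scraper.titles)
```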
Job posted by
Megha M

Principal Data Scientist

at Searce Inc

Founded 2004  •  Products & Services  •  100-1000 employees  •  Profitable
Data Science
Deep Learning
Artificial Intelligence (AI)
Natural Language Processing (NLP)
Python
Machine Learning (ML)
Hadoop
Computer Vision
Predictive modelling
Predictive analytics
TensorFlow
PyTorch
Pune
8 - 10 yrs
₹24L - ₹28L / yr

Who we are?

Searce is a niche Cloud Consulting business with futuristic tech DNA. We do new-age tech to realise the “Next” in the “Now” for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.


What do we believe?

  • Best practices are overrated
      • Implementing best practices can only make one average.
  • Honesty and Transparency
      • We believe in the naked truth. We do what we tell and tell what we do.
  • Client Partnership
      • Client-vendor relationship: No. We partner with clients instead.
      • And our sales team comprises 100% of our clients.

How do we work ?

It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.

  • Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
  • Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
  • Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
  • Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
  • Innovative: Innovate or Die. We love to challenge the status quo.
  • Experimental: We encourage curiosity & making mistakes.
  • Responsible: Driven. Self motivated. Self governing teams. We own it.

So, what are we hunting for ?

As a Principal Data Scientist, you will help develop and enhance the algorithms and technology that power our unique system. This role covers a wide range of challenges, from developing new models using pre-existing components to enabling current systems to be more intelligent. You should be able to train models using existing data and use them in the most creative manner to deliver the smartest experience to customers. You will have to develop multiple AI applications that push the threshold of intelligence in machines.


Working on multiple projects at a time, you will have to maintain a consistently high level of attention to detail while finding creative ways to provide analytical insights. You will also have to thrive in a fast, high-energy environment and should be able to balance multiple projects in real time. The thrill of the next big challenge should drive you, and when faced with an obstacle, you should be able to find clever solutions. You must have the ability and interest to work on a range of different types of projects and business processes, and must have a background that demonstrates this ability.

Your bucket of Undertakings :

  1. Collaborate with team members to develop new models to be used for classification problems
  2. Work on software profiling, performance tuning and analysis, and other general software engineering tasks
  3. Use independent judgment to take existing data and build new models from it
  4. Collaborate and provide technical guidance; come up with new ideas, rapid prototyping, and converting prototypes into scalable products
  5. Conduct experiments to assess the accuracy and recall of language processing modules and to study the effect of such experiments
  6. Lead AI R&D initiatives to include prototypes and minimum viable products
  7. Work closely with multiple teams on projects like Visual quality inspection, ML Ops, Conversational banking, Demand forecasting, Anomaly detection etc. 
  8. Build reusable and scalable solutions for use across the customer base
  9. Prototype and demonstrate AI related products and solutions for customers
  10. Assist business development teams in the expansion and enhancement of a pipeline to support short- and long-range growth plans
  11. Identify new business opportunities and prioritize pursuits for AI 
  12. Participate in long range strategic planning activities designed to meet the Company’s objectives and to increase its enterprise value and revenue goals
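A classification model of the kind item 1 describes can be illustrated with a tiny nearest-centroid classifier on toy 2-D data. This is purely a sketch; real work would use the frameworks named elsewhere in the posting (TensorFlow, PyTorch), and the labels and points are invented:

```python
from math import dist

# Toy training data: two labelled clusters in a 2-D feature space.
train = {
    "spam": [(0.9, 0.8), (1.0, 1.1), (0.8, 1.0)],
    "ham": [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0)],
}

# "Fit": compute one centroid per class.
centroids = {
    label: tuple(sum(coords) / len(points) for coords in zip(*points))
    for label, points in train.items()
}

def classify(x):
    # Predict the class whose centroid is nearest to x.
    return min(centroids, key=lambda label: dist(centroids[label], x))

print(classify((0.95, 0.9)))  # lands near the "spam" cluster
print(classify((0.05, 0.1)))  # lands near the "ham" cluster
```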

Education & Experience : 

  1. BE/B.Tech/Masters in a quantitative field such as CS, EE, Information Sciences, Statistics, Mathematics, Economics, Operations Research, or related, with a focus on applied and foundational Machine Learning, AI, NLP and/or data-driven statistical analysis & modelling
  2. 8+ years of experience, majorly in applying AI/ML/NLP/deep learning/data-driven statistical analysis & modelling solutions to multiple domains, including financial engineering; financial processes a plus
  3. Strong, proven programming skills with machine learning, deep learning and Big Data frameworks including TensorFlow, Caffe, Spark and Hadoop; experience with writing complex programs and implementing custom algorithms in these and other environments
  4. Experience beyond using open source tools as-is, and writing custom code on top of, or in addition to, existing open source frameworks
  5. Proven capability in demonstrating successful advanced technology solutions (either prototypes, POCs, well-cited research publications, and/or products) using ML/AI/NLP/data science in one or more domains
  6. Research and implement novel machine learning and statistical approaches
  7. Experience in data management, data analytics middleware, platforms and infrastructure, cloud and fog computing is a plus
  8. Excellent communication skills (oral and written) to explain complex algorithms, solutions to stakeholders across multiple disciplines, and ability to work in a diverse team

Accomplishment Set: 


  1. Extensive experience with Hadoop and Machine learning algorithms
  2. Exposure to Deep Learning, Neural Networks, or related fields and a strong interest and desire to pursue them
  3. Experience in Natural Language Processing, Computer Vision, Machine Learning or Machine Intelligence (Artificial Intelligence)
  4. Passion for solving NLP problems
  5. Experience with specialized tools and project for working with natural language processing
  6. Knowledge of machine learning frameworks like Tensorflow, Pytorch
  7. Experience with software version control systems such as Git/GitHub
  8. Fast learner and be able to work independently as well as in a team environment with good written and verbal communication skills
Job posted by
Mishita Juneja

Data Engineer with Expertise in ADF

at Numantra Technologies

Founded 2003  •  Products & Services  •  100-1000 employees  •  Profitable
ADF
PySpark
Jupyter Notebook
Big Data
Windows Azure
Python
Data Structures
Data engineering
Remote, Mumbai, Powai
2 - 12 yrs
₹8L - ₹18L / yr
      • Data pre-processing, data transformation, data analysis, and feature engineering
      • Performance optimization and productionizing of scripts (SQL, Pandas, Python or PySpark, etc.)
    • Required skills:
      • Bachelor's in Computer Science, Data Science, Computer Engineering, IT or equivalent
      • Fluency in Python (Pandas), PySpark, SQL, or similar
      • Azure Data Factory experience (min. 12 months)
      • Able to write efficient code using traditional and OO concepts and modular programming, following the SDLC process
      • Experience in production optimization and end-to-end performance tracing (technical root cause analysis)
      • Ability to work independently, with demonstrated experience in project or program management
      • Azure experience: the ability to translate data scientist code in Python and make it efficient (production-ready) for cloud deployment
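Turning data-scientist code into efficient production code, as the last bullet asks, often amounts to mechanical rewrites like the illustrative one below: the function's contract is preserved and verified while the complexity drops from quadratic to linear.

```python
# Typical "notebook" code a data scientist might hand over:
def dedupe_slow(values):
    out = []
    for v in values:          # O(n^2): membership test scans the list
        if v not in out:
            out.append(v)
    return out

# Production rewrite: same contract, one pass with a set.
def dedupe_fast(values):
    seen = set()
    out = []
    for v in values:
        if v not in seen:     # O(1) average membership test
            seen.add(v)
            out.append(v)
    return out

data = [3, 1, 3, 2, 1, 5]
assert dedupe_slow(data) == dedupe_fast(data) == [3, 1, 2, 5]
print(dedupe_fast(data))
```

The assertion is the important part: a production rewrite should always be checked against the original's behaviour before the slow version is retired.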
 
Job posted by
nisha mattas

Data Engineer

at Fragma Data Systems

Founded 2015  •  Products & Services  •  Profitable
ETL
Big Data
Hadoop
PySpark
SQL
Python
Spark
Microsoft Windows Azure
Datawarehousing
Bengaluru (Bangalore)
2 - 6 yrs
₹8L - ₹14L / yr
Roles and Responsibilities:

• Responsible for developing and maintaining applications with PySpark
• Contribute to the overall design and architecture of the applications developed and deployed.
• Performance tuning with respect to executor sizing and other environmental parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues.
• Implement projects based on functional specifications.

Must Have Skills:

• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience in SQL DBs - able to write queries of fair complexity.
• Should have excellent experience in Big Data programming for data transformations and aggregations
• Good at ETL architecture; business rules processing and data extraction from a Data Lake into data streams for business consumption.
• Good customer communication skills.
• Good analytical skills
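The transformation-and-aggregation work described above would normally be written with PySpark's DataFrame API, e.g. `df.groupBy("region").agg(F.sum("amount"))`. To keep this sketch runnable without a Spark cluster, the same aggregation logic is expressed in plain Python (the records are invented):

```python
from collections import defaultdict

# Invented records standing in for rows of a Spark DataFrame.
rows = [
    {"region": "south", "amount": 120.0},
    {"region": "north", "amount": 50.0},
    {"region": "south", "amount": 75.5},
]

# Plain-Python equivalent of df.groupBy("region").agg(F.sum("amount")).
totals = defaultdict(float)
for row in rows:
    totals[row["region"]] += row["amount"]

print(dict(totals))
```

On a real cluster the same intent distributes across partitions, which is why executor sizing and partition tuning (mentioned in the responsibilities) matter.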
Job posted by
Sudarshini K

Data Engineer

at Product / Internet / Media Companies

Agency job
via archelons
Big Data
Hadoop
Data processing
Python
Data engineering
HDFS
Spark
Data lake
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹30L / yr

REQUIREMENT:

  •  Previous experience of working in large-scale data engineering
  •  4+ years of experience working in data engineering and/or backend technologies, with cloud experience (any) mandatory.
  •  Previous experience of architecting and designing backends for large-scale data processing.
  •  Familiarity and experience with different technologies related to data engineering - different database technologies, Hadoop, Spark, Storm, Hive, etc.
  •  Hands-on, with the ability to contribute to a key portion of the data engineering backend.
  •  Self-inspired and motivated to drive for exceptional results.
  •  Familiarity and experience with the different stages of data engineering - data acquisition, data refining, large-scale data processing, and efficient data storage for business analysis.
  •  Familiarity and experience with different DB technologies and how to scale them.

RESPONSIBILITY:

  •  End-to-end responsibility for the data engineering architecture, design, development and implementation.
  •  Build data engineering workflow for large scale data processing.
  •  Discover opportunities in data acquisition.
  •  Bring industry best practices for data engineering workflow.
  •  Develop data set processes for data modelling, mining and production.
  •  Take additional tech responsibilities for driving an initiative to completion
  •  Recommend ways to improve data reliability, efficiency and quality
  •  Goes out of their way to reduce complexity.
  •  Humble and outgoing - engineering cheerleaders.
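The stages the requirements mention - data acquisition, data refining, and efficient storage - compose naturally as a streaming pipeline. A toy generator-based sketch, where each stage processes records lazily (the record format and stage contents are invented):

```python
def acquire():
    # Stand-in for reading from an upstream source (API, queue, files).
    yield from ["  alice,32 ", "bob,abc", "carol,45", ""]

def refine(lines):
    # Parse and drop malformed records (the "data refining" stage).
    for line in lines:
        parts = line.strip().split(",")
        if len(parts) == 2 and parts[1].isdigit():
            yield {"name": parts[0], "age": int(parts[1])}

def store(records):
    # Stand-in for efficient storage (e.g. Parquet files on HDFS).
    return list(records)

warehouse = store(refine(acquire()))
print(warehouse)
```

Because the stages are generators, records flow through one at a time; the same shape scales up when the stages become distributed jobs.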
Job posted by
Meenu Singh

Data Engineer

at Codalyze Technologies

Founded 2016  •  Products & Services  •  20-100 employees  •  Profitable
Hadoop
Big Data
Scala
Spark
Amazon Web Services (AWS)
Java
Python
Apache Hive
Mumbai
3 - 7 yrs
₹7L - ₹20L / yr
Job Overview :

Your mission is to help lead the team towards creating solutions that improve the way our business is run. Your knowledge of design, development, coding, testing and application programming will help your team raise their game, meeting your standards as well as satisfying both business and functional requirements. Your expertise in various technology domains will be counted on to set strategic direction and solve complex, mission-critical problems, internally and externally. Your quest to embrace leading-edge technologies and methodologies inspires your team to follow suit.

Responsibilities and Duties :

- As a Data Engineer you will be responsible for developing data pipelines for numerous applications handling all kinds of data: structured, semi-structured and unstructured. Big data knowledge, especially in Spark & Hive, is highly preferred.

- Work in a team and provide proactive technical oversight; advise development teams, fostering re-use, design for scale, stability, and operational efficiency of data/analytical solutions

Education level :

- Bachelor's degree in Computer Science or equivalent

Experience :

- Minimum 5+ years of relevant experience working on production-grade projects, with hands-on, end-to-end software development experience

- Expertise in application, data and infrastructure architecture disciplines

- Expert at designing data integrations using ETL and other data integration patterns

- Advanced knowledge of architecture, design and business processes

Proficiency in :

- Modern programming languages like Java, Python, Scala

- Big Data technologies Hadoop, Spark, HIVE, Kafka

- Writing decently optimized SQL queries

- Orchestration and deployment tools like Airflow & Jenkins for CI/CD (Optional)

- Responsible for design and development of integration solutions with Hadoop/HDFS, Real-Time Systems, Data Warehouses, and Analytics solutions

- Knowledge of system development lifecycle methodologies, such as Waterfall and Agile.

- An understanding of data architecture and modeling practices and concepts, including entity-relationship diagrams, normalization, abstraction, denormalization, dimensional modeling, and metadata modeling practices.

- Experience generating physical data models and the associated DDL from logical data models.

- Experience developing data models for operational, transactional, and operational reporting, including the development of or interfacing with data analysis, data mapping, and data rationalization artifacts.

- Experience enforcing data modeling standards and procedures.

- Knowledge of web technologies, application programming languages, OLTP/OLAP technologies, data strategy disciplines, relational databases, data warehouse development and Big Data solutions.

- Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals

Skills :

Must Know :

- Core big-data concepts

- Spark - PySpark/Scala

- Data integration tools like Pentaho, Nifi, SSIS, etc. (at least 1)

- Handling of various file formats

- Cloud platform - AWS/Azure/GCP

- Orchestration tool - Airflow
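"Handling of various file formats" in practice often means small, well-tested converters; here is a sketch moving records from CSV to JSON Lines using only the standard library (the sample data is invented):

```python
import csv
import io
import json

# Sample CSV input; in a real pipeline this would come from a file or stream.
csv_text = "id,city\n1,Mumbai\n2,Pune\n"

# Read CSV records and re-serialise each row as one JSON object per line.
reader = csv.DictReader(io.StringIO(csv_text))
jsonl = "\n".join(json.dumps(row) for row in reader)
print(jsonl)
```

The same read-one-format, write-another shape extends to Parquet, Avro, or ORC once the appropriate libraries are added.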
Job posted by
Aishwarya Hire