Scala Jobs

Explore top Scala job opportunities from top companies and startups. All jobs are added by verified employees who can be contacted directly below.
Pune
12 - 16 yrs
₹22L - ₹35L / yr
Java
NodeJS (Node.js)
Scala
Javascript
MVC Framework
+5 more

Qualification: BE / BTech / MCA / ME / MTech


Required Skills

● Strong experience in architecting distributed cloud-based systems using technologies such as Java, Scala, Angular and Node.js.

● Strong understanding of large-scale distributed architectures, microservices architecture, reactive programming paradigms, design patterns, information architecture, and application development processes and practices

 

● Knowledge of the learning domain is an added advantage

● Experience in working with Open Source software is desirable

● Must be very good with Java and JavaScript-related technologies

● Experience with one or more JavaScript frameworks such as jQuery, Twitter Bootstrap, Backbone, Angular and others.

● Strong understanding of transactional databases and of multiple types of NoSQL databases such as Cassandra, Elasticsearch, Neo4j and others

● Understanding of Akka and the Play framework

● Good working knowledge of DevOps on cloud infrastructure

● Experience in TDD/BDD is required

● Knowledge of Kafka and Azure/Google Cloud is an added advantage

● Good understanding of the Software-as-a-Service (SaaS) model is preferred.

Mumbai
2 - 3 yrs
₹8L - ₹12L / yr
Java
C++
Scala
Spark

LogiNext is looking for a technically savvy and passionate Software Engineer - Data Science to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights.

In this role, you should be highly analytical with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine-learning and research.

Your goal will be to help our company analyze trends to make better decisions. Data scientists in this role need to understand how the software works: apart from experience developing in R and Python, they must know modern approaches to software development and their impact. DevOps practices such as continuous integration and deployment, along with experience in cloud computing, are everyday skills for managing and processing data.

Responsibilities:

• Identify valuable data sources and automate collection processes
• Undertake preprocessing of structured and unstructured data
• Analyze large amounts of information to discover trends and patterns
• Build predictive models and machine-learning algorithms
• Combine models through ensemble modeling
• Present information using data visualization techniques
• Propose solutions and strategies to business challenges
• Collaborate with engineering and product development teams


Requirements:

• Bachelor's degree or higher in Computer Science, Information Technology, Information Systems, Statistics, Mathematics, Commerce, Engineering, Business Management, Marketing or a related field from a top-tier school
• 2 to 3 years of experience in data mining, data modeling, and reporting
• Understanding of SaaS-based products and services
• Understanding of machine learning and operations research
• Experience with R, SQL and Python; familiarity with Scala, Java or C++ is an asset
• Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
• Analytical mind, business acumen and problem-solving aptitude
• Excellent communication and presentation skills
• Proficiency in Excel for data management and manipulation
• Experience in statistical modeling techniques and data wrangling
• Able to work independently and set goals keeping business objectives in mind

Mumbai
4 - 7 yrs
₹12L - ₹19L / yr
Machine Learning (ML)
Data Science
PHP
Java
Spark
+1 more

LogiNext is looking for a technically savvy and passionate Senior Software Engineer - Data Science to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights.

In this role, you should be highly analytical with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine-learning and research.

Your goal will be to help our company analyze trends to make better decisions. Data scientists in this role need to understand how the software works: apart from experience developing in R and Python, they must know modern approaches to software development and their impact. DevOps practices such as continuous integration and deployment, along with experience in cloud computing, are everyday skills for managing and processing data.

Responsibilities:

• Adapting and enhancing machine learning techniques based on physical intuition about the domain
• Design sampling methodology and prepare data, including data cleaning, univariate analysis and missing value imputation; identify appropriate analytic and statistical methodology; develop predictive models; and document the process and results
• Lead projects both as a principal investigator and project manager, responsible for meeting project requirements on schedule and on budget
• Coordinate and lead efforts to innovate by deriving insights from heterogeneous sets of data generated by our suite of Aerospace products
• Support and mentor data scientists
• Maintain and work with our data pipeline that transfers and processes several terabytes of data using Spark, Scala, Python, Apache Kafka, Pig/Hive & Impala
• Work directly with application teams/partners (internal clients such as Xbox, Skype, Office) to understand their offerings/domain and help them become successful with data so they can run controlled experiments (A/B testing)
• Understand the data generated by experiments, and produce actionable, trustworthy conclusions from them
• Apply data analysis, data mining and data processing to present data clearly and develop experiments (A/B testing)
• Work with the development team to build tools for data logging and repeatable data tasks to accelerate and automate data scientist duties


Requirements:

• Bachelor’s or Master’s degree in Computer Science, Math, Physics, Engineering, Statistics or another technical field; PhD preferred
• 4 to 7 years of experience in data mining, data modeling, and reporting
• 3+ years of experience working with large data sets or doing large-scale quantitative analysis
• Expert SQL scripting required
• Development experience in one of the following: Scala, Java, Python, Perl, PHP, C++ or C#
• Experience working with Hadoop, Pig/Hive, Spark, MapReduce
• Ability to drive projects
• Basic understanding of statistics – hypothesis testing, p-values, confidence intervals, regression, classification, and optimization are core lingo
• Analysis: should be able to perform exploratory data analysis and get actionable insights from the data, with impressive visualization
• Modeling: should be familiar with ML concepts and algorithms; understanding of the internals and pros/cons of models is required
• Strong algorithmic problem-solving skills
• Experience manipulating large data sets through statistical software (e.g. R, SAS) or other methods
• Superior verbal, visual and written communication skills to educate and work with cross-functional teams on controlled experiments
• Experimentation design or A/B testing experience is preferred
• Experience in team management


Tata digital

Agency job
via Seven N Half by Gurpreet Desai
Gurugram
7 - 10 yrs
₹15L - ₹30L / yr
Azure
SQL
Data engineering
Azure Databricks
Azure Data Factory
+8 more

 

Data Engineer

- Big data development experience – Kafka, Hadoop, Databricks
- Experience building data pipelines using Spark and/or Hive
- Strong knowledge of Scala/Java/Python
- Advanced proficiency in SQL and NoSQL
- Strong in database and data warehousing concepts
- Expertise in SQL, SQL tuning, schema design, Python and ETL processes
- Experience in DataOps and MLOps on Databricks or a similar platform
- Experience with any of the cloud technologies (GCP, Azure, AWS)
- Highly motivated, a self-starter and a quick learner
- Proficiency in statistical procedures, experiments and machine learning techniques
- Must have knowledge of the basics of data analytics and data modelling
- Excellent written and verbal communication skills

 

- Active involvement in building an enterprise-scale intelligence solution
- Design new processes and build large, complex data sets
- Conduct statistical modeling and experiment design
- Test and validate predictive models
- Build web prototypes and perform data visualization
- Generate algorithms and create computer models
- Should possess excellent analytical and troubleshooting skills
- Should be aware of the Agile mode of operations and should have been part of Scrum teams
- Should be open to working in a DevOps model, with responsibility for both development and support as the application goes live
- Should be able to work in shifts (if required)
- Should be open to working on fast-paced projects with multiple stakeholders


QUT

Agency job
via Hiringhut Solutions Pvt Ltd by Neha Bhattarai
Bengaluru (Bangalore)
4 - 7 yrs
₹7L - ₹10L / yr
Java
J2EE
Spring Boot
Hibernate (Java)
Apache Kafka
+5 more
What You'll Do

• Design and develop distributed, scalable, high-availability web services.
• Work independently, completing small to mid-sized projects while managing competing priorities in a demanding production environment.
• Write reusable, maintainable, quality code.

What You'll Bring

• BS in CS (or equivalent) and 4+ years of hands-on software design and development experience in building high-availability, scalable backend systems.
• Hands-on coding experience is a must.
• Expertise in working on Java technology stacks in a Linux environment - Java, Spring/Hibernate, MVC frameworks, TestNG, JUnit.
• Expertise in database schema design, performance efficiency, and SQL on leading RDBMSs such as MySQL, Oracle, MSSQL, etc.
• Expertise in OOAP, RESTful web services, and building scalable systems.

Preferred Qualifications:

• Experience using platforms such as Drools, Solr, Memcached, Akka, Scala, Kafka etc. is a plus.
• Participation in and contributions to open-source software development.

Diggibyte Technologies

Agency job
via KUKULKAN by Pragathi P
Bengaluru (Bangalore)
2 - 3 yrs
₹10L - ₹15L / yr
Scala
Spark
Python
Microsoft Windows Azure
SQL Azure

Hiring for Data Engineer - Bangalore (Novel Tech Park)

Salary: up to 15 LPA

Experience: 3-5 years

  • We are looking for experienced (3-5 years) Data Engineers to join our team in Bangalore.
  • Someone who can help our clients build scalable, reliable, and secure data analytics solutions.

 

Technologies you will get to work with:

 

1. Azure Databricks
2. Azure Data Factory
3. Azure DevOps
4. Spark with Python & Scala, and Airflow scheduling

 

What You Will Do:

 

* Build large-scale batch and real-time data pipelines with data processing frameworks like Spark and Scala on the Azure platform (a minimal sketch follows this list).

* Collaborate with other software engineers, ML engineers and stakeholders, taking the learning and leadership opportunities that will arise every single day.

* Use best practices in continuous integration and delivery.

* Share technical knowledge with other members of the data engineering team.

* Work in multi-functional agile teams to continuously experiment, iterate and deliver on new product objectives.

* You will get to work with massive data sets and learn to apply the latest big data technologies on a leading-edge platform.
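To give a flavour of the batch side of such pipelines, here is a minimal, illustrative Spark/Scala sketch; the storage paths and column names are hypothetical, not this project's actual code.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    // Minimal batch job: read raw orders, aggregate daily revenue per country,
    // and write the result to a curated zone. Paths and columns are placeholders.
    object OrdersBatchJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("OrdersBatchJob").getOrCreate()

        val orders = spark.read.parquet("abfss://raw@account.dfs.core.windows.net/orders")

        val dailyRevenue = orders
          .groupBy(to_date(col("order_ts")).as("day"), col("country"))
          .agg(sum(col("amount")).as("revenue"))

        dailyRevenue.write.mode("overwrite")
          .parquet("abfss://curated@account.dfs.core.windows.net/daily_revenue")

        spark.stop()
      }
    }

On Azure Databricks the SparkSession is provided for you, and a job like this would typically be scheduled from Azure Data Factory or Airflow.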

 

Job Functions: Information Technology

Employment Type: Full-time

Seniority Level: Mid / Entry level


QUT

Agency job
via Hiringhut Solutions Pvt Ltd by Neha Bhattarai
Bengaluru (Bangalore)
3 - 7 yrs
₹1L - ₹10L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+8 more
What You'll Bring

• 3+ years of experience in big data & data warehousing technologies
• Experience in processing and organizing large data sets
• Experience with big data tool sets such as Airflow and Oozie
• Experience working with BigQuery, Snowflake or MPP, Kafka, Azure, GCP and AWS
• Experience developing in programming languages such as SQL, Python, Java or Scala
• Experience in pulling data from a variety of database systems like SQL Server, MariaDB, and Cassandra/NoSQL databases
• Experience working with retail, advertising or media data at large scale
• Experience working with data science engineering and advanced data insights development
• Strong quality proponent who strives to impress with his/her work
• Strong problem-solving skills and ability to navigate complicated database relationships
• Good written and verbal communication skills; demonstrated ability to work with product management and/or business users to understand their needs
Posted by Nelson Xavier
Bengaluru (Bangalore), Pune, Hyderabad
4 - 8 yrs
₹10L - ₹25L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more

Job responsibilities

- You will partner with teammates to create complex data processing pipelines in order to solve our clients' most complex challenges

- You will pair to write clean and iterative code based on TDD

- Leverage various continuous delivery practices to deploy, support and operate data pipelines

- Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available

- Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions

- Create data models and speak to the tradeoffs of different modeling approaches

- Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process

- Encourage open communication and advocate for shared outcomes

 

Technical skills

- You have a good understanding of data modelling and experience with data engineering tools and platforms such as Spark (Scala) and Hadoop

- You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting

- Hands-on experience with MapR, Cloudera, Hortonworks and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)

- You are comfortable taking data-driven approaches and applying data security strategy to solve business problems

- Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems

- You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments

 



Professional skills

- You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives

- An interest in coaching, sharing your experience and knowledge with teammates

- You enjoy influencing others and always advocate for technical excellence while being open to change when needed

- Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more

Posted by Saravanan K
Remote only
6 - 9 yrs
₹20L - ₹35L / yr
Vue.js
AngularJS (1.x)
Angular (2+)
React.js
Javascript
+6 more

Senior Fullstack Developer (Remote Opportunity)

 

https://6sense.com/

 

https://www.slintel.com/


Slintel is a fast growing B2B SaaS company in the sales and marketing tech space. We are funded by top tier VCs, and going after a billion-dollar opportunity. At Slintel, we are building a sales development automation platform that can significantly improve outcomes for sales teams, while reducing the number of hours spent on research and outreach.

We are a big data company and perform deep analysis on technology buying patterns and buyer pain points to understand where buyers are in their journey. Over 100 billion data points are analyzed every week to derive recommendations on where companies should focus their marketing and sales efforts. Third-party intent signals are then clubbed with first-party data from CRMs to derive meaningful recommendations on whom to target on any given day.



● Experience: 5 years of experience in tech-based companies, preferably start-ups.
● Hands-on experience and deep understanding of working with large-scale datasets
● Experience in one of the languages like Java, Python or Scala is preferred
● Working knowledge of any front-end technology, such as React or Angular
● Ability to work with complex business flows and deal with huge amounts of data
● Experience in building microservices and distributed systems preferred

Our dual missions — one for the world, one for us:

• For the world: Improve transparency and trust in the B2B ecosystem
• For ourselves: Lead fulfilling, impactful lives

Our core values (how we act):

• Have Empathy

• Continuously push the barrier

• Make data-driven decisions

• Take smart risks

• Have fun at work


• We’ll pay you handsomely! This is a full-time, salaried position with statutory benefits like PF & Gratuity.

• We’ll invest in your physical and mental well-being with competitive medical insurance benefits and top-up options, regular emotional wellbeing sessions and other fun activities!

• We’ll encourage you to take regular vacations ... seriously.

• We promise to support you in every way possible by offering flexible leave options and other benefits with respect to Covid, because for us employees are always the first priority!

 




Opportunity with top conglomerate

Agency job
via Seven N Half by Gurpreet Desai
Mumbai, Gurugram
3 - 9 yrs
₹5L - ₹30L / yr
Scala
SQL
Spark
Hadoop
Big Data
+6 more
Functional / Technical Skills:

- Big data development experience – Kafka, Hadoop
- Experience building data pipelines using Spark and/or Hive
- Strong knowledge of Python
- Advanced proficiency in Scala, SQL and NoSQL
- Strong in database and data warehousing concepts
- Expertise in SQL, SQL tuning, schema design, Python and ETL processes
- Experience with cloud technologies required (Azure, data modeling, Azure Databricks, Azure Data Factory)
- Experience in working with Azure Data Lake and Stream Analytics
- Highly motivated, a self-starter and a quick learner
- Proficiency in statistical procedures, experiments and machine learning techniques
- Must know the basics of data analytics and data modeling
- Excellent written and verbal communication skills

Roles/Responsibilities:

- Active involvement in the building of a recommendation engine
- Design new processes and build large, complex data sets
- Conduct statistical modeling and experiment design
- Test and validate predictive models
- Build web prototypes and perform data visualization
- Generate algorithms and create computer models
- Should possess excellent analytical and troubleshooting skills
- Should be aware of the Agile mode of operations and should have been part of Scrum teams
- Should be open to working in the DevOps model, with responsibility for both development and support as the application goes live
- Should be able to work in shifts (if required)
- Should be open to working on fast-paced projects with multiple stakeholders
Pune, Chennai
5 - 9 yrs
₹15L - ₹20L / yr
Scala
PySpark
Spark
SQL Azure
Hadoop
+4 more
  • 5+ years of experience in a Data Engineering role in a cloud environment
  • Must have good experience in Scala/PySpark (preferably in a Databricks environment)
  • Extensive experience with Transact-SQL
  • Experience in Databricks/Spark
  • Strong experience in data warehouse projects
  • Expertise in database development projects with ETL processes
  • Manage and maintain data engineering pipelines
  • Develop batch processing, streaming and integration solutions
  • Experienced in building and operationalizing large-scale enterprise data solutions and applications
  • Using one or more Azure data and analytics services in combination with custom solutions
  • Azure Data Lake, Azure SQL DW (Synapse), and SQL Database products or equivalent products from other cloud services providers
  • In-depth understanding of data management (e.g. permissions, security, and monitoring)
  • Cloud repositories, e.g. Azure GitHub, Git
  • Experience in an agile environment (prefer Azure DevOps)

Good to have

  • Manage source data access security
  • Automate Azure Data Factory pipelines
  • Continuous Integration/Continuous Deployment (CI/CD) pipelines, source repositories
  • Experience in implementing and maintaining CI/CD pipelines
  • Understanding of Power BI and Delta Lakehouse architecture
  • Knowledge of software development best practices
  • Excellent analytical and organization skills
  • Effective working in a team as well as working independently
  • Strong written and verbal communication skills
  • Expertise in database development projects and ETL processes
Posted by Phani Kalyan
Pune
9 - 14 yrs
₹20L - ₹40L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more
Job Id: SG0601

Hi,

Enterprise Minds is looking for a Data Architect for its Pune location.

Req Skills: Python, PySpark, Hadoop, Java, Scala

Tier 1 MNC

Agency job
Chennai, Pune, Bengaluru (Bangalore), Noida, Gurugram, Kochi (Cochin), Coimbatore, Hyderabad, Mumbai, Navi Mumbai
3 - 12 yrs
₹3L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+1 more
Greetings,
We are hiring for a Tier 1 MNC software developer role requiring good knowledge of Spark, Hadoop and Scala.

Sopra Steria

Agency job
via Mount Talent Consulting by Himani Jain
Chennai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 8 yrs
₹2L - ₹12L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+1 more
Good hands-on experience with Spark and Scala.
Should have experience in Big Data and Hadoop.
Currently providing WFH.
Immediate joiners or those with up to a 30-day notice period.

This company provides on-demand cloud computing platforms.

Agency job
via New Era India by Niharica Singh
Remote, Pune, Mumbai, Bengaluru (Bangalore), Gurugram, Hyderabad
15 - 25 yrs
₹35L - ₹55L / yr
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Windows Azure
Architecture
Python
+5 more
  • 15+ years of hands-on technical application architecture and application build/modernization experience
  • 15+ years of experience as a technical specialist in customer-facing roles
  • Ability to travel to client locations as needed (25-50%)
  • Extensive experience architecting, designing and programming applications in an AWS Cloud environment
  • Experience with designing and building applications using AWS services such as EC2, AWS Elastic Beanstalk, AWS OpsWorks
  • Experience architecting highly available systems that utilize load balancing, horizontal scalability and high availability
  • Hands-on programming skills in any of the following: Python, Java, Node.js, Ruby, .NET or Scala
  • Agile software development expertise
  • Experience with continuous integration tools (e.g. Jenkins)
  • Hands-on familiarity with CloudFormation
  • Experience with configuration management platforms (e.g. Chef, Puppet, Salt, or Ansible)
  • Strong scripting skills (e.g. PowerShell, Python, Bash, Ruby, Perl, etc.)
  • Strong practical application development experience on Linux and Windows-based systems
  • Extracurricular software development passion (e.g. active open-source contributor)
Remote, Bengaluru (Bangalore), Pune, Chennai
3 - 25 yrs
₹12L - ₹80L / yr
Java
Javascript
React.js
Angular (2+)
AngularJS (1.x)
+5 more

Job Description

 

A top-of-the-line, premium software advisory & development services firm. Our customers include promising early-stage start-ups, Fortune 500 enterprises and investors. We draw inspiration from Leonardo da Vinci's famous quote: "Simplicity is the ultimate sophistication."

Domains we work in

Multiple: publishing, retail, banking, networking, social sector, education and many more.

Tech we use

Java, Scala, Golang, Elixir, Python, RoR, .Net, JS frameworks

More details on tech

You name it and we might be working on it. The important thing here is not the technology but the kind of solutions we provide to our clients. We believe that to solve some of the most complex problems, holistic thinking and solution design are of extreme importance. Technology is the most important tool to implement the solution thus designed.

Skills & Requirements

Who should join us

We are looking for curious & inquisitive technology practitioners. Our customers see us as one of the most premium advisory and development services firms, hence most of the problems we work on are complex and often hard to solve. You can expect to work in small (2-5 person) teams, working very closely with the customers in iteratively developing and evolving the solution. We are continually on the search for passionate, bright and energetic professionals to join our team.

So, if you are someone who has strong fundamentals in technology and wants to stretch beyond regular role-based boundaries, then Sahaj is the place for you. You will experience a world where there are no roles or grades; you will play different roles and wear multiple hats to deliver a software project.

What would you do here

* Work on complex, custom-designed, scalable, multi-tiered software development projects

* Work closely with clients (commercial & social enterprises, start-ups), both business and technical staff members

* Be responsible for the quality of software and resolving any issues regarding the solution

* Think through hard problems, not limited to technology, and work with a team to realise and implement solutions

* Learn something new every day

Below are key skills expected

* Development and delivery experience in any of the programming languages

* Passion for software engineering and craftsman-like coding prowess

* Great design and solutioning skills (OO & Functional)

* Experience including analysis, design, coding and implementation of large scale custom built object-oriented applications

* Understanding of code refactoring and optimisation issues

* Understanding of virtualisation & DevOps; experience with Ansible, Chef, Docker preferable

* Ability to learn new technologies and adapt to different situations

* Ability to handle ambiguity on a day to day basis

Posted by Gaurav Gaglani
Remote only
3 - 7 yrs
₹10L - ₹70L / yr
Scala
Akka
GraphQL
RESTful APIs

TeamExtn is looking for a passionate Senior Scala Engineer. You will be expected to build pragmatic solutions on mission-critical initiatives. If you know your stuff, see the beauty in code, have knowledge in depth and breadth, advocate best practices, and love to work with distributed systems, then this is an ideal position for you.

As a core member of our Special Projects team, you will work on various new projects in a startup-like environment. These projects may include such things as building new APIs (REST/GraphQL/gRPC) for new products, integrating new products with core Carvana services, building highly scalable back end processing systems in Scala and integrating with systems backed by Machine Learning. You will use cutting edge functional Scala libraries such as ZIO. You will have the opportunity to work closely with our Product, Experience and Software Engineering teams to deliver impact.
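For a flavour of the functional style mentioned above, here is a minimal, illustrative ZIO 2 sketch; it is not TeamExtn's actual code, and the service name, price lookup and retry policy are hypothetical.

    import zio._

    // Hypothetical pricing service: look up a price with a timeout and a
    // bounded exponential-backoff retry, then print the result.
    object PriceLookup extends ZIOAppDefault {

      def fetchPrice(vin: String): Task[BigDecimal] =
        ZIO.attempt(BigDecimal(20000)) // stand-in for a real downstream call
          .timeoutFail(new RuntimeException("price lookup timed out"))(3.seconds)
          .retry(Schedule.exponential(100.millis) && Schedule.recurs(3))

      def run =
        fetchPrice("VIN123").flatMap(price => Console.printLine(s"price: $price"))
    }

In a real service, the lookup would call a database or an HTTP client, and the effect would be wired into an API layer such as REST, GraphQL or gRPC.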

Responsibilities:

  • Build highly scalable APIs and back-end processing systems for new products
  • Contribute to the full software development lifecycle, from design and development to testing and operating in production
  • Communicate effectively with engineers, product managers and data scientists
  • Drive scalability and performance within our distributed AI platform

Skills And Experience:

  • 4+ years of experience with Scala or another functional language
  • Experience with Akka and Lightbend stack
  • Expert with PostgreSQL, MySQL or MS SQL
  • Experience in architecting, developing, deploying and operating large scale distributed systems and actor systems
  • Experience with cloud APIs (e.g., GCP, AWS, Azure)
  • Messaging systems such as GCP Pub/Sub, RabbitMQ, Kafka
  • Strong foundation in algorithms and data structures and their real-world use cases.
  • Solid understanding of computer systems and networks
  • Production quality coding standards and patterns

 

BONUS SKILLS:

  • Experience with functional programming in Scala
  • Knowledge of ZIO and the related ecosystem
  • Experience with functional database libraries in Scala (Quill preferred)


Our Interview Process:
Our interview process involves 3 steps.

1. Code Evaluation - In this step we send you a simple assignment that you implement.

2. First Technical Round - In this round we evaluate your skills, concepts and knowledge of Scala.

3. Second Technical Round - If the first round goes well, we check how hands-on you are with Scala and hold an online problem-solving session.


Top Management Consulting Company

Agency job
Bengaluru (Bangalore), Gurugram
7 - 12 yrs
₹14L - ₹28L / yr
Java
Python
Scala
C#
.NET
+12 more
Requisites:
  • Front-end: JavaScript/HTML/CSS, any front-end framework
  • Backend: Java/Scala/Python/C#/.NET
  • Big data experience is a plus
  • Cloud: AWS/Azure/GCP
  • Databases: SQL databases; NoSQL databases
  • Location: Bangalore/Gurgaon
  • Experience: 7-12 years

Product based company

Agency job
via Zyvka Global Services by Ridhima Sharma
Bengaluru (Bangalore)
3 - 12 yrs
₹5L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+6 more

Responsibilities:

  • Should act as a technical resource for the Data Science team and be involved in creating and implementing current and future analytics projects, like data lake design, data warehouse design, etc.
  • Analysis and design of ETL solutions to store/fetch data from multiple systems like Google Analytics, CleverTap, CRM systems, etc.
  • Developing and maintaining data pipelines for real-time analytics as well as batch analytics use cases
  • Collaborate with data scientists and actively work in the feature engineering and data preparation phase of model building
  • Collaborate with product development and DevOps teams in implementing the data collection and aggregation solutions
  • Ensure quality and consistency of the data in the data warehouse and follow best data governance practices
  • Analyse large amounts of information to discover trends and patterns
  • Mine and analyse data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies

Requirements

  • Bachelor’s or Master’s in a highly numerate discipline such as Engineering, Science or Economics
  • 2-6 years of proven experience working as a Data Engineer, preferably in an e-commerce/web-based or consumer technologies company
  • Hands-on experience of working with different big data tools like Hadoop, Spark, Flink, Kafka and so on
  • Good understanding of the AWS ecosystem for big data analytics
  • Hands-on experience in creating data pipelines, either using tools or by independently writing scripts
  • Hands-on experience in scripting languages like Python, Scala, Unix shell scripting and so on
  • Strong problem-solving skills with an emphasis on product development
  • Experience using business intelligence tools, e.g. Tableau, Power BI, would be an added advantage (not mandatory)
Posted by Puja Kumari
Remote only
2 - 6 yrs
₹6L - ₹20L / yr
Apache Hive
Spark
Scala
PySpark
Data engineering
+4 more
We are looking for big data engineers to join our transformational consulting team serving one of our top US clients in the financial sector. You'd get an opportunity to develop big data pipelines and convert business requirements to production-grade services and products. With less emphasis on enforcing how to do a particular task, we believe in giving people the opportunity to think outside the box and come up with their own innovative solutions to problems.

You will primarily be developing, managing and executing multiple prospect campaigns as part of the Prospect Marketing Journey to ensure the best conversion and retention rates. Below are the roles, responsibilities and skillsets we are looking for; if you feel these resonate with you, please get in touch with us by applying to this role.
Roles and Responsibilities:
• You'd be responsible for the development and maintenance of applications built with technologies involving Enterprise Java and distributed technologies.
• You'd collaborate with developers, product managers, business analysts and business users in conceptualizing, estimating and developing new software applications and enhancements.
• You'd assist in the definition, development, and documentation of software objectives, business requirements, deliverables, and specifications in collaboration with multiple cross-functional teams.
• Assist in the design and implementation process for new products; research and create POCs for possible solutions.
Skillset:
• Bachelor's or Master's degree in a technology-related field preferred
• Overall experience of 2-3 years with big data technologies
• Hands-on experience with Spark (Java/Scala)
• Hands-on experience with Hive and shell scripting
• Knowledge of HBase and Elasticsearch
• Development experience in Java/Python is preferred
• Familiarity with profiling, code coverage, logging, common IDEs and other development tools
• Demonstrated verbal and written communication skills, and ability to interface with Business, Analytics and IT organizations
• Ability to work effectively in a short-cycle, team-oriented environment, managing multiple priorities and tasks
• Ability to identify non-obvious solutions to complex problems
Pune, Bengaluru (Bangalore), Coimbatore, Hyderabad, Gurugram
3 - 10 yrs
₹18L - ₹40L / yr
Apache Kafka
Spark
Hadoop
Apache Hive
Big Data
+5 more

Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.



You’ll spend time on the following:

  • You will partner with teammates to create complex data processing pipelines in order to solve our clients’ most ambitious challenges
  • You will collaborate with Data Scientists in order to design scalable implementations of their models
  • You will pair to write clean and iterative code based on TDD
  • Leverage various continuous delivery practices to deploy data pipelines
  • Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
  • Develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
  • Create data models and speak to the tradeoffs of different modeling approaches

Here’s what we’re looking for:

 

  • You have a good understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop
  • You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting
  • Hands-on experience with MapR, Cloudera, Hortonworks and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)
  • You are comfortable taking data-driven approaches and applying data security strategy to solve business problems 
  • Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems
  • Strong communication and client-facing skills with the ability to work in a consulting environment
Posted by Genevieve Mascarenhas
Remote only
5 - 8 yrs
₹10L - ₹30L / yr
webpack
Gulp
Jasmine (Javascript Testing Framework)
NOSQL Databases
SQL
+8 more
Senior Fullstack Engineer:
Role on paper: Senior Software Engineer.
Must-have technical expertise:
  • 5-8 years of experience working with Angular, NodeJS and GraphQL
  • Experience with building progressive and cross-platform web applications
  • Experience with UI build tools like Webpack, npm, Gulp, Bower etc.
  • Experience with unit testing frameworks like Jasmine, MochaJS or Jest
  • Knowledge of cross-browser compatibility, browser rendering behaviour and performance
  • 2-3 years of experience working extensively with at least one database (SQL, NoSQL, stream or graph databases)
  • Should have experience in containerizing and deploying applications using Docker and Kubernetes, and in writing microservices in general
Good to have:
  • Experience working with Scala
  • Experience working with GCP
  • Some experience writing high-level and low-level architecture designs to implement end-to-end features
  • Experience with DevOps tools like Jenkins and GitLab CI/CD
Example projects:
  • Building API microservices to support Elasticsearch responses
Posted by Shiva V
Remote, Hyderabad
4 - 6 yrs
₹15L - ₹20L / yr
Python
PySpark
Spark
Scala
Microsoft Azure Data factory
• Should have good experience with Python or Scala/PySpark/Spark
• Experience with advanced SQL
• Experience with Azure Data Factory and Azure Databricks
• Experience with Azure IoT, Cosmos DB, Blob Storage
• API management, FHIR API development
• Proficient with Git and CI/CD best practices
• Experience working with Snowflake is a plus
Remote only
9 - 20 yrs
Best in industry
OLTP
data ops
cloud data
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+6 more

THE ROLE: Sr. Cloud Data Infrastructure Engineer

As a Sr. Cloud Data Infrastructure Engineer with Intuitive, you will be responsible for building or converting legacy data pipelines from legacy environments to modern cloud environments to help the analytics and data science initiatives across our enterprise customers. You will be working closely with SMEs in Data Engineering and Cloud Engineering to create solutions and extend Intuitive's DataOps Engineering projects and initiatives. The Sr. Cloud Data Infrastructure Engineer will be a central, critical role for establishing the DataOps/DataX data logistics and management for building data pipelines, enforcing best practices, owning the building of complex and performant data lake environments, and working closely with Cloud Infrastructure Architects and DevSecOps automation teams. The Sr. Cloud Data Infrastructure Engineer is the main point of contact for all things related to data lake formation and data at scale. In this role, we expect our DataOps leaders to be obsessed with data and with providing insights to help our end customers.

ROLES & RESPONSIBILITIES:

  • Design, develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built
  • Develop scalable and reusable frameworks for ingesting large data sets from multiple sources
  • Modern data orchestration engineering: query tuning, performance tuning, troubleshooting, and debugging big data solutions
  • Provide technical leadership, foster a team environment, and provide mentorship and feedback to technical resources
  • Deep understanding of ETL/ELT design methodologies, patterns, personas, strategy, and tactics for complex data transformations
  • Data processing/transformation using various technologies such as Spark and cloud services
  • Understand current data engineering pipelines using legacy SAS tools and convert them to modern pipelines

 

Data Infrastructure Engineer Strategy Objectives: End-to-End Strategy

Define how data is acquired, stored, processed, distributed, and consumed. Collaboration and shared responsibility across disciplines as partners in delivery for progressing our maturity model in the end-to-end data practice.

  • Understanding and experience with modern cloud data orchestration and engineering for one or more of the following cloud providers - AWS, Azure, GCP.
  • Leading multiple engagements to design and develop data logistic patterns to support data solutions using data modeling techniques (such as file based, normalized or denormalized, star schemas, schema on read, Vault data model, graphs) for mixed workloads, such as OLTP, OLAP, streaming using any formats (structured, semi-structured, unstructured).
  • Applying leadership and proven experience with architecting and designing data implementation patterns and engineered solutions using native cloud capabilities that span data ingestion & integration (ingress and egress), data storage (raw & cleansed), data prep & processing, master & reference data management, data virtualization & semantic layer, data consumption & visualization.
  • Implementing cloud data solutions in the context of business applications, cost optimization, client's strategic needs and future growth goals as it relates to becoming a 'data driven' organization.
  • Applying and creating leading practices that support high availability, scalable, process and storage intensive solutions architectures to data integration/migration, analytics and insights, AI, and ML requirements.
  • Applying leadership and review to create high quality detailed documentation related to cloud data Engineering.
  • Understanding of one or more of the following is a big plus: CI/CD, cloud DevOps, containers (Kubernetes/Docker, etc.), Python/PySpark/JavaScript.
  • Implementing cloud data orchestration and data integration patterns (AWS Glue, Azure Data Factory, Event Hub, Databricks, etc.), storage and processing (Redshift, Azure Synapse, BigQuery, Snowflake)
  • Possessing certification(s) in one of the following is a big plus: AWS/Azure/GCP data engineering and migration.

 

 

KEY REQUIREMENTS:

  • 10+ years' experience as a data engineer
  • Must have 5+ years implementing data engineering solutions with multiple cloud providers and toolsets
  • This is a hands-on role building data pipelines using cloud-native and partner solutions; hands-on technical experience with data at scale
  • Must have deep expertise in one of the programming languages for data processes (Python, Scala); experience with Python, PySpark, Hadoop, Hive and/or Spark to write data pipelines and data processing layers
  • Must have worked with multiple database technologies and patterns; good SQL experience for writing complex SQL transformations
  • Performance tuning of Spark SQL running on S3/data lake/Delta Lake storage, and strong knowledge of Databricks and cluster configurations
  • Nice to have: Databricks administration, including security and infrastructure features of Databricks
  • Experience with development tools for CI/CD, unit and integration testing, automation and orchestration

Information Solution Provider Company

Agency job
via Jobdost by Saida Jabbar
Delhi
3 - 5 yrs
₹3L - ₹10L / yr
Spark
Scala
Hadoop
PySpark
Data engineering
+2 more

Data Engineer 

Responsibilities:

 

  • Designing and implementing fine-tuned, production-ready data/ML pipelines on the Hadoop platform.
  • Driving optimization, testing and tooling to improve quality.
  • Reviewing and approving high-level & detailed designs to ensure that the solution delivers to the business needs and aligns with the data & analytics architecture principles and roadmap.
  • Understanding business requirements and solution design to develop and implement solutions that adhere to big data architectural guidelines and address business requirements.
  • Following proper SDLC (code review, sprint process).
  • Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, etc.
  • Building robust and scalable data infrastructure (both batch processing and real-time) to support needs from internal and external users.
  • Understanding various data security standards and using secure data security tools to apply and adhere to the required data controls for user access on the Hadoop platform.
  • Supporting and contributing to development guidelines and standards for data ingestion.
  • Working with data scientists and the business analytics team to assist in data ingestion and data-related technical issues.
  • Designing and documenting the development & deployment flow.

 

Requirements:

 

  • Experience in developing REST API services using one of the Scala frameworks.
  • Ability to troubleshoot and optimize complex queries on the Spark platform.
  • Expert in building and optimizing ‘big data’ data/ML pipelines, architectures and data sets.
  • Knowledge in modelling unstructured to structured data design.
  • Experience in big data access and storage techniques.
  • Experience in doing cost estimation based on the design and development.
  • Excellent debugging skills for the technical stack mentioned above, which even includes analyzing server logs and application logs.
  • Highly organized, self-motivated, proactive, and able to propose the best design solutions.
  • Good time management and multitasking skills to work to deadlines, both independently and as part of a team.

 

Remote only
4 - 7 yrs
₹10L - ₹30L / yr
ETL
Informatica
Data Warehouse (DWH)
Big Data
Scala
+4 more

Job Description:

We are looking for a Big Data Engineer who has worked across the entire ETL stack: someone who has ingested data in batch and live-stream formats, transformed large volumes of data daily, built a data warehouse to store the transformed data, and integrated different visualization dashboards and applications with the data stores. The primary focus will be on choosing optimal solutions to use for these purposes, then maintaining, implementing, and monitoring them.

Responsibilities:

  • Develop, test, and implement data solutions based on functional / non-functional business requirements.
  • You would be required to code in Scala and PySpark daily, on cloud as well as on-prem infrastructure.
  • Build data models to store the data in the most optimized manner.
  • Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Implement the ETL process and optimal data pipeline architecture.
  • Monitor performance and advise on any necessary infrastructure changes.
  • Create data tools for the analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Proactively identify potential production issues and recommend and implement solutions.
  • Must be able to write quality code and build secure, highly available systems.
  • Create design documents that describe the functionality, capacity, architecture, and process.
  • Review peer code and pipelines before deploying to production, checking for optimization issues and code standards.

Skill Sets:

  • Good understanding of optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and ‘big data’ technologies.
  • Proficient understanding of distributed computing principles.
  • Experience in working with batch processing / real-time systems using various open-source technologies like NoSQL, Spark, Pig, Hive, Apache Airflow.
  • Implemented complex projects dealing with considerable data sizes (PB).
  • Optimization techniques (performance, scalability, monitoring, etc.).
  • Experience with integration of data from multiple data sources.
  • Experience with NoSQL databases, such as HBase, Cassandra, MongoDB, etc.
  • Knowledge of various ETL techniques and frameworks, such as Flume.
  • Experience with various messaging systems, such as Kafka or RabbitMQ.
  • Creation of DAGs for data engineering.
  • Expert at Python/Scala programming, especially for data engineering/ETL purposes (see the streaming sketch below).
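To make the real-time side of the skills above concrete, here is a minimal, illustrative Spark Structured Streaming sketch in Scala that consumes a Kafka topic and maintains per-minute event counts; the broker address and topic name are hypothetical.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    // Minimal streaming job: read events from Kafka and count them per minute.
    // Broker and topic names are placeholders, not a real deployment.
    object ClickCounts {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("ClickCounts").getOrCreate()

        val raw = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "clicks")
          .load()

        // Kafka delivers key/value as binary; keep the payload as a string
        val counts = raw
          .select(col("timestamp"), col("value").cast("string").as("event"))
          .groupBy(window(col("timestamp"), "1 minute"))
          .count()

        // Console sink for illustration; a production job would write to a
        // durable sink such as Delta Lake or Cassandra
        counts.writeStream
          .outputMode("complete")
          .format("console")
          .start()
          .awaitTermination()
      }
    }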

 

 

 

Posted by Shanu Mohan
Gurugram, Mumbai, Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹17L / yr
Python
PySpark
Amazon Web Services (AWS)
Spark
Scala
+2 more
  • Hands-on experience in any cloud platform
  • Versed in Spark, Scala/Python and SQL
  • Microsoft Azure experience
  • Experience working on real-time data processing pipelines
Remote only
5 - 8 yrs
₹7L - ₹15L / yr
React Native
MongoDB
Kotlin
MariaDB
Python
+3 more

About Rara Delivery

Not just a delivery company…

RaRa Delivery is revolutionising instant delivery for e-commerce in Indonesia through data driven logistics.

RaRa Delivery is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimisation technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on ‘one-to-one’ deliveries, the company has developed proprietary, real-time batching tech to do ‘many-to-many’ deliveries within a few hours. RaRa is already in partnership with some of the top eCommerce players in Indonesia like Blibli, Sayurbox, Kopi Kenangan and many more.

We are a distributed team with the company headquartered in Singapore 🇸🇬, core operations in Indonesia 🇮🇩 and the technology team based out of India 🇮🇳.

Future of eCommerce Logistics.

  • Data driven logistics company that is bringing in same day delivery revolution in Indonesia 🇮🇩
  • Revolutionising delivery as an experience
  • Empowering D2C Sellers with logistics as the core technology


About the Role

  1. Architect, build, and maintain excellent React Native applications with clean code.
  2. Implement pixel-perfect UIs that match designs.
  3. Implement clean, modern, smooth animations and transitions that provide an excellent user experience.
  4. Integrate third-party APIs.
  5. Work as part of a small team to build applications.
  6. Write unit and integration tests.
  7. Release applications to the iOS and Google Play stores.
  8. Work with native modules when required.
  9. Work as part of a small team, which will include other React Native developers, a project manager, a QA professional, and a designer.
  10. Complete two-week sprints and participate in sprint retrospectives and daily standups.
  11. Assist with building estimates.
  12. Interface with clients via Slack, Zoom, and email.
  13. Track your time throughout the day using Toggle.
  14. Work with modern tools including Jira, Slack, GitHub, Google Docs, etc.
  15. Be part of a community of React Native developers who share knowledge and help each other as problems arise.
Posted by Puneeta Mishra
Remote only
5 - 8 yrs
₹7L - ₹15L / yr
Java
J2EE
Spring Boot
Hibernate (Java)
NodeJS (Node.js)
+5 more

About Rara Delivery

Not just a delivery company…

RaRa Delivery is revolutionising instant delivery for e-commerce in Indonesia through data driven logistics.

RaRa Delivery is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimisation technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on ‘one-to-one’ deliveries, the company has developed proprietary, real-time batching tech to do ‘many-to-many’ deliveries within a few hours. RaRa is already in partnership with some of the top eCommerce players in Indonesia like Blibli, Sayurbox, Kopi Kenangan and many more.

We are a distributed team with the company headquartered in Singapore 🇸🇬, core operations in Indonesia 🇮🇩 and the technology team based out of India 🇮🇳.

Future of eCommerce Logistics.

  • Data driven logistics company that is bringing in same day delivery revolution in Indonesia 🇮🇩
  • Revolutionising delivery as an experience
  • Empowering D2C Sellers with logistics as the core technology


About the Role
  • 5-7 years of experience with the following technologies: Core Java/J2EE, Spring Boot, API creation, Hibernate, JDBC, SQL/PLSQL, messaging architecture, REST/web services, Linux
  • Expertise in application, data and infrastructure architecture disciplines
  • Advanced knowledge of architecture, design and business processes
  • 4+ years of Java/J2EE development experience
  • Strong technical development experience in effectively writing code, performing code reviews, and implementing best practices on configuration management and code refactoring
  • Experience in working with vendor applications
  • Experience in writing optimized queries for MySQL databases
  • Proven problem-solving and analytical skills
  • A delivery-focused approach to work and the ability to work without direction
  • Experience in Agile development techniques, including Scrum
  • Experience implementing and/or using Git
  • Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals
  • Bachelor's degree in Computer Science or a related discipline preferred
Remote only
2 - 8 yrs
₹7L - ₹15L / yr
React Native
MongoDB
Redis
Kotlin
Python
+3 more

About Rara Delivery

Not just a delivery company…

RaRa Delivery is revolutionising instant delivery for e-commerce in Indonesia through data driven logistics.

RaRa Delivery is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimisation technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on ‘one-to-one’ deliveries, the company has developed proprietary, real-time batching tech to do ‘many-to-many’ deliveries within a few hours. RaRa is already in partnership with some of the top eCommerce players in Indonesia like Blibli, Sayurbox, Kopi Kenangan and many more.

We are a distributed team with the company headquartered in Singapore 🇸🇬, core operations in Indonesia 🇮🇩 and the technology team based out of India 🇮🇳.

Future of eCommerce Logistics.

  • Data driven logistics company that is bringing in same day delivery revolution in Indonesia 🇮🇩
  • Revolutionising delivery as an experience
  • Empowering D2C Sellers with logistics as the core technology

About the Role

  1. Architect, build, and maintain excellent React Native applications with clean code.
  2. Implement pixel-perfect UIs that match designs.
  3. Implement clean, modern, smooth animations and transitions that provide an excellent user experience.
  4. Integrate third-party APIs.
  5. Work as part of a small team to build applications.
  6. Write unit and integration tests.
  7. Release applications to the iOS and Google Play stores.
  8. Work with native modules when required.
  9. Work as part of a small team, which will include other React Native developers, a project manager, a QA professional, and a designer.
  10. Complete two-week sprints and participate in sprint retrospectives and daily standups.
  11. Assist with building estimates.
  12. Interface with clients via Slack, Zoom, and email.
  13. Track your time throughout the day using Toggle.
  14. Work with modern tools including Jira, Slack, GitHub, Google Docs, etc.
  15. Be part of a community of React Native developers who share knowledge and help each other as problems arise.
Read more

Information solution provider company

Agency job
via Jobdost by Saida Jabbar
icon
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
icon
2 - 4 yrs
icon
₹5L - ₹9L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+2 more

Data Engineer 

Responsibilities:

 

  • Designing and implementing fine-tuned, production-ready data/ML pipelines on the Hadoop platform.
  • Driving optimization, testing, and tooling to improve quality.
  • Reviewing and approving high-level & detailed designs to ensure that the solution delivers on the business needs and aligns with the data & analytics architecture principles and roadmap.
  • Understanding business requirements and solution design to develop and implement solutions that adhere to big data architectural guidelines and address business requirements.
  • Following proper SDLC (code review, sprint process).
  • Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, etc.
  • Building robust and scalable data infrastructure (both batch processing and real-time) to support needs from internal and external users.
  • Understanding various data security standards and using secure data security tools to apply and adhere to the required data controls for user access on the Hadoop platform.
  • Supporting and contributing to development guidelines and standards for data ingestion.
  • Working with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.
  • Designing and documenting the development & deployment flow.

 

Requirements:

 

  • Experience in developing REST API services using one of the Scala frameworks (a minimal sketch follows this list).
  • Ability to troubleshoot and optimize complex queries on the Spark platform.
  • Expertise in building and optimizing ‘big data’ data/ML pipelines, architectures, and data sets.
  • Knowledge of modelling unstructured data into structured designs.
  • Experience with Big Data access and storage techniques.
  • Experience in doing cost estimation based on the design and development.
  • Excellent debugging skills for the technical stack mentioned above, including analyzing server logs and application logs.
  • Highly organized, self-motivated, proactive, and able to propose the best design solutions.
  • Good time management and multitasking skills to work to deadlines, both independently and as part of a team.
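
As a rough sketch of the first requirement, here is a minimal REST endpoint using Akka HTTP, one of several Scala frameworks a candidate might pick; the route, port, and response body are hypothetical.

```scala
import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

object HealthApi {
  def main(args: Array[String]): Unit = {
    // Typed actor system backing the HTTP server
    implicit val system: ActorSystem[Nothing] = ActorSystem(Behaviors.empty, "health-api")

    // A single GET /health route returning a static JSON body
    val route =
      path("health") {
        get {
          complete("""{"status":"ok"}""")
        }
      }

    Http().newServerAt("0.0.0.0", 8080).bind(route)
    println("Listening on http://0.0.0.0:8080/health")
  }
}
```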
Read more
icon
Remote only
icon
3 - 15 yrs
icon
₹15L - ₹45L / yr
Java
.NET
Python
Scala
Object Oriented Programming (OOPs)
+2 more

Domains we work in

Multiple: publishing, retail, banking, networking, social sector, education and many more.

Tech we use

Java, Scala, Golang, Elixir, Python, RoR, .Net, JS frameworks, IOS, Android


More details on tech

You name it and we might be working on it. The important thing is not the technology here but the kind of solutions we provide to our clients. We believe that to solve some of the most complex problems, holistic thinking and solution design are of extreme importance. Technology is the most important tool to implement the solution thus designed. 


Skills & Requirements
Who should join us
We are looking for curious & inquisitive technology practitioners. Our customers see us as one of the most premium advisory and development services firms, hence most of the problems we work on are complex and often hard to solve. You can expect to work in small (2-5 person) teams, working very closely with the customers in iteratively developing and evolving the solution. We are continually on the search for passionate, bright and energetic professionals to join our team.
So, if you are someone who has strong fundamentals in technology and wants to stretch beyond regular role-based boundaries, then Sahaj is the place for you. You will experience a world where there are no roles or grades, and you will play different roles and wear multiple hats to deliver a software project. 


What would you do here

* Work on complex, custom-designed, scalable, multi-tiered software development projects
* Work closely with clients (commercial & social enterprises, start-ups), both business and technical staff members
* Be responsible for the quality of software and resolving any issues regarding the solution
* Think through hard problems, not limited to technology, and work with a team to realise and implement solutions
* Learn something new every day


Below are key skills expected

* Development and delivery experience in any of the programming languages
* Passion for software engineering and craftsman-like coding prowess
* Great design and solutioning skills (OO & Functional)
* Experience including analysis, design, coding and implementation of large scale custom built object-oriented applications

* Understanding of code refactoring and optimisation issues
* Understanding of Virtualisation & DevOps. Experience with Ansible, Chef, Docker preferable
* Ability to learn new technologies and adapt to different situations
* Ability to handle ambiguity on a day to day basis

Read more

Product and Service based company

Agency job
via Jobdost by Sathish Kumar
icon
Hyderabad, Ahmedabad
icon
4 - 8 yrs
icon
₹15L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Snowflake schema
Python
Spark
+13 more

Job Description

 

Mandatory Requirements 

  • Experience in AWS Glue

  • Experience in Apache Parquet 

  • Proficient in AWS S3 and data lake 

  • Knowledge of Snowflake

  • Understanding of file-based ingestion best practices.

  • Scripting languages - Python & PySpark

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 

  • Data ingestion from different data sources that expose data using different technologies, such as RDBMS, flat files, streams, and time-series data from various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies. 

  • Data processing/transformation using various technologies such as Spark and cloud services. You will need to understand your part of the business logic and implement it using the language supported by the base data platform. 

  • Develop automated data quality checks to make sure the right data enters the platform and to verify the results of the calculations (see the sketch after this list). 

  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.

  • Define process improvement opportunities to optimize data collection, insights and displays.

  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 

  • Identify and interpret trends and patterns from complex data sets 

  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 

  • Key participant in regular Scrum ceremonies with the agile teams  

  • Proficient at developing queries, writing reports and presenting findings 

  • Mentor junior members and bring best industry practices.
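
To make the automated data-quality responsibility concrete, here is a minimal, hypothetical Scala Spark gate of the kind described above; the bucket path and `customer_id` column are invented for illustration.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object BatchQualityCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("BatchQualityCheck").getOrCreate()

    // Hypothetical ingested batch landed in the data lake
    val df = spark.read.parquet("s3://my-data-lake/ingest/customers/")

    // Two simple gates: no null business keys, no duplicate business keys
    val nullKeys   = df.filter(col("customer_id").isNull).count()
    val duplicates = df.count() - df.dropDuplicates("customer_id").count()

    require(nullKeys == 0,   s"quality gate failed: $nullKeys rows with null customer_id")
    require(duplicates == 0, s"quality gate failed: $duplicates duplicate customer_id rows")

    spark.stop()
  }
}
```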

 

QUALIFICATIONS

  • 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales) 

  • Strong background in math, statistics, computer science, data science or related discipline

  • Advanced knowledge of one of the following languages: Java, Scala, Python, C# 

  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  

  • Proficient with:

  • Data mining/programming tools (e.g. SAS, SQL, R, Python)

  • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)

  • Data visualization (e.g. Tableau, Looker, MicroStrategy)

  • Comfortable learning about and deploying new technologies and tools. 

  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 

  • Good written and oral communication skills and ability to present results to non-technical audiences 

  • Knowledge of business intelligence and analytical tools, technologies and techniques.

Familiarity and experience in the following is a plus: 

  • AWS certification

  • Spark Streaming 

  • Kafka Streaming / Kafka Connect 

  • ELK Stack 

  • Cassandra / MongoDB 

  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools

Read more
DP
Posted by Baiju Sukumaran
icon
Remote only
icon
5 - 7 yrs
icon
₹9L - ₹23L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more

Greetings from Rishabh Software!!


An opportunity to enhance your career as a big data developer awaits at your doorstep.

Here is a brief about our company, followed by the JD:

About Us:

Rishabh Software (CMMI Level 3), an India-based IT service provider, focuses on cost-effective, qualitative, and timely delivered offshore software development, Business Process Outsourcing (BPO), and engineering services. Our core competency lies in developing customized software solutions using web-based and client/server technology.

With over 20 years of software development experience working with various domestic and international companies, we at Rishabh Software provide solutions tailored to client requirements that help industries across domains turn business problems into strategic advantages.

Through our offices in the US (Silicon Valley), UK (London) and India (Vadodara & Bangalore) we service our global clients with qualitative and well-executed software development, BPO and Engineering services.

Please visit our URL www.rishabhsoft.com

 

Job Description:

We are looking to hire a talented big data engineer to develop and manage our company’s Big Data solutions. In this role, you will be required to design and implement Big Data tools and frameworks, implement ELT processes, collaborate with development teams, build cloud platforms, and maintain the production system.

To ensure success as a big data engineer, you should have in-depth knowledge of Hadoop technologies and excellent problem-solving skills. A top-notch Big Data Engineer understands the needs of the company and institutes scalable data solutions for its current and future needs.

Responsibilities:

  • Regular interaction with client and internal team to understand the Big Data needs.
  • Developing Hadoop systems.
  • Loading disparate data sets and conducting pre-processing services using Hive or Pig (a brief sketch follows this list).
  • Finalizing the scope of the system and delivering Big Data solutions.
  • Managing the communications between the internal system and the survey vendor.
  • Collaborating with the software research and development teams.
  • Building cloud platforms for the development of company applications.
  • Maintaining production systems.
  • Training staff on data resource management.
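
As one possible shape of the Hive pre-processing step mentioned above, here is a minimal, hypothetical Scala Spark job that runs a Hive CTAS to normalise raw survey answers; the database, table, and column names are invented for illustration.

```scala
import org.apache.spark.sql.SparkSession

object SurveyPreprocess {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SurveyPreprocess")
      .enableHiveSupport() // lets Spark read and write Hive-managed tables
      .getOrCreate()

    // Create a clean, query-ready table from the raw Hive table
    spark.sql(
      """CREATE TABLE IF NOT EXISTS clean.survey_responses
        |STORED AS PARQUET AS
        |SELECT respondent_id,
        |       trim(lower(answer)) AS answer,
        |       response_ts
        |FROM   raw.survey_responses
        |WHERE  answer IS NOT NULL""".stripMargin)

    spark.stop()
  }
}
```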

Requirements:

  • Bachelor’s degree in computer engineering or computer science
  • Previous experience as a big data engineer/developer
  • In-depth knowledge of Apache Spark, Hadoop, Sqoop, Pig, Hive, and Kafka
  • In-depth knowledge and understanding of the AWS cloud
  • Knowledge of one of the languages like Java, C++, Ruby, PHP, Python, or R, along with working knowledge of Linux
  • Knowledge of NoSQL and RDBMS databases, including Redis and MongoDB
  • Excellent problem management skills
  • Training newly joined resources and supporting them to make their journey within the organization successful
  • Good communication skills
  • Ability to solve complex networking, data, and software issues


We will be happy to receive your response with your latest resume if your profile matches the above JD, and we shall connect back for a detailed discussion.

Read more

Persistent System Ltd

Agency job
via Milestone Hr Consultancy by Haina khan
icon
Pune, Bengaluru (Bangalore), Hyderabad
icon
4 - 9 yrs
icon
₹8L - ₹27L / yr
Python
PySpark
Amazon Web Services (AWS)
Spark
Scala
Greetings..

We have an urgent requirement for a Data Engineer/Sr. Data Engineer at a reputed MNC.

Exp: 4-9yrs

Location: Pune/Bangalore/Hyderabad

Skills: We need candidates with either Python + AWS, PySpark + AWS, or Spark + Scala.
Read more

Top 3 Fintech Startup

Agency job
via Jobdost by Sathish Kumar
icon
Bengaluru (Bangalore)
icon
5 - 8 yrs
icon
₹12L - ₹21L / yr
Javascript
NodeJS (Node.js)
Python
Go Programming (Golang)
Scala
+1 more
Job Responsibilities:

● Write clean, reliable, reusable, scalable, testable, and maintainable code.
● Produce best-in-class documentation, testing, and monitoring.
● Estimate effort and identify risks.
● Mentor/coach other engineers in the team to facilitate their development and to provide technical leadership to them.
● Rise above details as and when needed to spot broader issues/trends and implications for the product/team as a whole.
● Practice and promote craftsmanship in software engineering (coding, testing, code reviews, documentation, scalability, performance, etc.).
● Break down requirements, estimate tasks, and assist in planning the roadmap accurately.
● Develop iterative solutions to address expansive product goals.
● Platformize components as libraries and utilities, and promote reuse.
● Be able to conceptualize and develop prototypes quickly.
● Own large technical deliverables and execute in a structured manner.
● Take accountability for the overall health of the products you build and ensure predictability of your team's deliverables.
● Drive the technical roadmap of the team in collaboration with Product and Business teams.

Qualifications:
● B.Tech/BE/MCA in Computer Science or a related technical discipline (or equivalent), or high technical acumen and rich technical experience.
● 4+ years of expertise with modern Javascript in developing REST web services.
Read more
icon
Pune, Bengaluru (Bangalore), Hyderabad, Nagpur
icon
4 - 9 yrs
icon
₹4L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more
Greetings..

We have urgent requirements for Big Data Developer profiles at our reputed MNC.

Location: Pune/Bangalore/Hyderabad/Nagpur
Experience: 4-9yrs

Skills: PySpark + AWS, or Spark + Scala + AWS, or Python + AWS
Read more
icon
Dubai
icon
2 - 4 yrs
icon
Best in industry
Python
Windows Azure
Java
Big Data
Scala

As a Data Engineer, your role will encompass: 

  • Design and build production data pipelines from ingestion to consumption within a hybrid big data architecture, using Scala, Python, Talend, etc.   
  • Gather and address technical and design requirements.  
  • Refactor existing applications to optimize their performance by setting the appropriate architecture and integrating best practices and standards. 
  • Participate in the entire data life-cycle, mainly focusing on coding, debugging, and testing. 
  • Troubleshoot and debug ETL pipelines. 
  • Document each process. 

Technical Requirements: - 

  • BSc degree in Computer Science/Computer Engineering (a Master's is a plus). 
  • 2+ years of experience as a Data Engineer. 
  • In-depth understanding of core ETL concepts, data modelling, data lineage, data governance, data catalogs, etc. 
  • 2+ years of work experience in Scala, Python, or Java. 
  • Good knowledge of Big Data tools such as Spark, HDFS, Hive, Flume, etc. 
  • Hands-on experience with ETL tools like Talend/Informatica is a plus. 
  • Good knowledge of Kafka and Spark Streaming is a big plus. 
  • 2+ years of experience in using the Azure cloud and its resources/services (such as Azure Data Factory, Azure Databricks, SQL Synapse, Azure DevOps, Logic Apps, Power BI, Azure Event Hubs, etc.). 
  • Strong experience with relational databases (MySQL, SQL Server).  
  • Exposure to data visualization tools like Power BI, Qlik Sense, or MicroStrategy. 
  • 2+ years of experience in developing APIs (REST & SOAP protocols). 
  • Strong knowledge of Continuous Integration & Continuous Deployment (CI/CD) utilizing Docker containers, Jenkins, etc. 
  • Strong competencies in algorithms and software architecture. 
  • Excellent analytical and teamwork skills. 

 Good to have: - 

  • Previous on-prem working experience is a plus. 
  • In-depth understanding of the entire web development process (design, development, and deployment) 
  • Previous experience in automated testing including unit testing & UI testing. 

 

Read more

We believe in solving real-world complex digital challenges with an innovative design thinking approach

Agency job
via HyrHub by Shwetha Naik
icon
Mumbai
icon
4 - 7 yrs
icon
₹15L - ₹22L / yr
Scala
RESTful APIs
MySQL
Algorithms
SQL
+1 more
We expect the successful candidate to deliver high-quality software and to be passionate about software engineering. You must have a proficient understanding of software development concepts. You will be responsible for developing easy-to-support, scalable software and liaising with our platform teams.

• 4+ years of hands-on professional experience using Scala, Core Java, RESTful APIs, and related frameworks
• 4 years of hands-on experience creating/consuming web services and data queries in SQL
• 3+ years’ experience working with geographically dispersed teams that fall across different time zones
• Experience with distributed architecture, including web services and technologies
• Developing POCs
• Working knowledge of JIRA or other ALM tools to create a productive, high-quality development environment
• Solid understanding of and experience with object-oriented design and development
• Practiced understanding of Agile development methodologies and of DevOps integration
• The ability to write reusable, optimized, maintainable code that is well documented and follows industry-standard best practices
• Good problem-solving skills
• Good communication and presentation skills: the ability to communicate in a clear and concise manner across all stakeholder groups and with staff from junior to senior levels

Please share your resume along with contact details.
Read more

PRODUCT ENGINEERING BASED MNC

Agency job
via Exploro Solutions by ramya ramchandran
icon
Bengaluru (Bangalore)
icon
5 - 8 yrs
icon
₹12L - ₹25L / yr
Scala
Spark
Apache Spark
ETL
SQL

Strong experience with SQL and relational databases

  • Good programming experience in Scala & Spark
  • Good experience in ETL batch data pipeline development and migration/upgrading
  • Python – good to have
  • AWS – good to have
  • Knowledgeable in the areas of Big Data/Hadoop/S3/Hive. Nice to have experience with ETL frameworks (e.g. Airflow, Flume, Oozie, etc.)
  • Ability to work independently, take ownership, and bring strong troubleshooting/debugging skills
  • Good communication and collaboration skills
Read more
DP
Posted by Jhalak Doshi
icon
Remote, Mumbai
icon
4 - 6 yrs
icon
₹1L - ₹15L / yr
Scala
Java
Akka
MySQL
Google Cloud Platform (GCP)
+2 more

Job Description:

TeamExtn is looking for a passionate Senior Scala Engineer. You will be expected to build pragmatic solutions on mission-critical initiatives. If you know your stuff, see the beauty in code, have knowledge in depth and breadth, advocate best practices, and love to work with distributed systems, then this is an ideal position for you.

As a core member of our Special Projects team, you will work on various new projects in a startup-like environment. These projects may include such things as building new APIs (REST/GraphQL/gRPC) for new products, integrating new products with core Carvana services, building highly scalable back end processing systems in Scala and integrating with systems backed by Machine Learning. You will use cutting edge functional Scala libraries such as ZIO. You will have the opportunity to work closely with our Product, Experience and Software Engineering teams to deliver impact.
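
For a flavour of the functional Scala this role calls for, here is a minimal, hypothetical ZIO 2 sketch (the names and values are illustrative, not Carvana code): two effects composed in a for-comprehension, with failure recovered via `catchAll`.

```scala
import zio._

object PriceLookup extends ZIOAppDefault {

  // Hypothetical effectful lookup that may fail
  def fetchPrice(vin: String): Task[BigDecimal] =
    if (vin.nonEmpty) ZIO.succeed(BigDecimal(18999))
    else ZIO.fail(new IllegalArgumentException("empty VIN"))

  // Compose two effects; nothing runs until the runtime interprets this value
  val program: Task[Unit] =
    for {
      price <- fetchPrice("1HGCM82633A004352")
      _     <- Console.printLine(s"Listed at $$$price")
    } yield ()

  // Recover from any failure instead of crashing the app
  def run = program.catchAll(e => Console.printLine(s"Lookup failed: ${e.getMessage}"))
}
```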

Responsibilities:

  • Build highly scalable APIs and back-end processing systems for new products
  • Contribute to the full software development lifecycle, from design and development to testing and operating in production
  • Communicate effectively with engineers, product managers, and data scientists
  • Drive scalability and performance within our distributed AI platform

Skills And Experience:

  • 4+ years’ experience with Scala, Java, or other functional languages
  • Experience with Akka and the Lightbend stack
  • Expertise with PostgreSQL, MySQL, or MS SQL
  • Experience in architecting, developing, deploying, and operating large-scale distributed systems and actor systems
  • Experience with cloud APIs (e.g., GCP, AWS, Azure)
  • Messaging systems such as GCP Pub/Sub, RabbitMQ, and Kafka
  • Strong foundation in algorithms and data structures and their real-world use cases
  • Solid understanding of computer systems and networks
  • Production-quality coding standards and patterns

 

BONUS SKILLS:

  • Experience with functional programming in Scala
  • Knowledge of ZIO and related ecosystem
  • Experience with functional database libraries in Scala (Quill preferred)
  • Kubernetes and Docker
  • Elasticsearch
  • Typescript, React and frontend UI development experience
  • gRPC, GraphQL
Read more
icon
Mumbai, Pune
icon
8 - 14 yrs
icon
₹10L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more
Job Responsibilities
1. Understand the business problem and translate it to data services and engineering outcomes.
2. Expertise in working on cloud application designs, cloud approval plans, and the systems required to manage cloud storage.
3. Explore new technologies and learn new techniques to solve business problems creatively.
4. Collaborate with different teams - engineering and business - to build better data products.
5. Regularly evaluate cloud applications, hardware, and software.
6. Respond to technical issues in a professional and timely manner.
7. Identify the top cloud architecture solutions to successfully meet the strategic needs of the company.
8. Offer guidance on infrastructure movement techniques, including bulk application transfers into the cloud.
9. Manage the team and handle delivery of 2-3 projects.

Qualifications
Is education overrated? Yes, we believe so. But there is no way to locate you otherwise, so we might look for at least a computer science, computer engineering, information technology, or relevant degree, along with:

1. Over 4-6 years of experience in data handling
2. Hands-on experience with any one programming language (Python, Java, Scala)
3. Understanding of SQL is a must
4. Big data (Hadoop, Hive, YARN, Sqoop)
5. MPP platforms (Spark, Presto)
6. Data-pipeline & scheduler tools (Oozie, Airflow, NiFi)
7. Streaming engines (Kafka, Storm, Spark Streaming) - see the sketch after this list
8. Any relational database or DW experience
9. Any ETL tool experience
10. Hands-on experience in pipeline design, ETL, and application development
11. Hands-on experience with cloud platforms like AWS, GCP, etc.
12. Good communication skills and strong analytical skills
13. Experience in team handling and project delivery
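
For the streaming-engines item above, here is a minimal, hypothetical Spark Structured Streaming job that reads events from Kafka and lands them as Parquet. It assumes the spark-sql-kafka connector is available; the broker address, topic name, and paths are invented for illustration.

```scala
import org.apache.spark.sql.SparkSession

object ClickstreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ClickstreamIngest").getOrCreate()

    // Subscribe to a Kafka topic as an unbounded streaming DataFrame
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "clickstream")
      .load()
      .selectExpr("CAST(value AS STRING) AS json", "timestamp")

    // Land micro-batches as Parquet; the checkpoint makes the job restartable
    events.writeStream
      .format("parquet")
      .option("path", "s3://my-lake/clickstream/")
      .option("checkpointLocation", "s3://my-lake/checkpoints/clickstream/")
      .start()
      .awaitTermination()
  }
}
```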
Read more

Information Solution Provider Company

Agency job
via Jobdost by Sathish Kumar
icon
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
icon
2 - 7 yrs
icon
₹10L - ₹15L / yr
Spark
Scala
Hadoop
Big Data
Data engineering
+2 more

Responsibilities:

 

  • Designing and implementing fine-tuned, production-ready data/ML pipelines on the Hadoop platform.
  • Driving optimization, testing, and tooling to improve quality.
  • Reviewing and approving high-level & detailed designs to ensure that the solution delivers on the business needs and aligns with the data & analytics architecture principles and roadmap.
  • Understanding business requirements and solution design to develop and implement solutions that adhere to big data architectural guidelines and address business requirements.
  • Following proper SDLC (code review, sprint process).
  • Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, etc.
  • Building robust and scalable data infrastructure (both batch processing and real-time) to support needs from internal and external users.
  • Understanding various data security standards and using secure data security tools to apply and adhere to the required data controls for user access on the Hadoop platform.
  • Supporting and contributing to development guidelines and standards for data ingestion.
  • Working with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.
  • Designing and documenting the development & deployment flow.

 

Requirements:

 

  • Experience in developing REST API services using one of the Scala frameworks.
  • Ability to troubleshoot and optimize complex queries on the Spark platform.
  • Expertise in building and optimizing ‘big data’ data/ML pipelines, architectures, and data sets.
  • Knowledge of modelling unstructured data into structured designs.
  • Experience with Big Data access and storage techniques.
  • Experience in doing cost estimation based on the design and development.
  • Excellent debugging skills for the technical stack mentioned above, including analyzing server logs and application logs.
  • Highly organized, self-motivated, proactive, and able to propose the best design solutions.
  • Good time management and multitasking skills to work to deadlines, both independently and as part of a team.

 

Read more
DP
Posted by Nidhi Mishra
icon
Gurugram
icon
2 - 5 yrs
icon
₹12L - ₹13L / yr
Java
Scala
Big Data
Spark
Amazon Web Services (AWS)

Experience:

The candidate should have about 2+ years of experience with design and development in Java/Scala. Experience with algorithms, data structures, databases, and distributed systems is mandatory.

 

Required Skills:

Mandatory: -

  1. Core Java or Scala
  2. Experience in Big Data and Spark
  3. Extensive experience in developing Spark jobs. Should possess good OOP knowledge and be aware of enterprise application design patterns.
  4. Should have the ability to analyze, design, develop, and test Spark jobs of varying complexity.
  5. Working knowledge of Unix/Linux.
  6. Hands-on experience in Spark: creating RDDs and applying operations - transformations and actions (see the sketch below).
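
To make the transformation/action split concrete, here is a minimal, hypothetical word-count job on RDDs (the input path and local master are invented for illustration). `flatMap`, `map`, and `reduceByKey` are lazy transformations; nothing executes until the `take` action runs.

```scala
import org.apache.spark.sql.SparkSession

object WordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("WordCount")
      .master("local[*]") // local mode, for illustration only
      .getOrCreate()
    val sc = spark.sparkContext

    val lines = sc.textFile("input.txt")   // creating an RDD from a text file
    val counts = lines
      .flatMap(_.split("\\s+"))            // transformation (lazy)
      .map(word => (word, 1))              // transformation (lazy)
      .reduceByKey(_ + _)                  // transformation (lazy)

    counts.take(10).foreach(println)       // action: triggers the whole computation

    spark.stop()
  }
}
```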

Good To have: -

  1. Python
  2. Spark Streaming
  3. PySpark
  4. Azure/AWS cloud knowledge on the data storage and compute side

 



Read more
DP
Posted by Nidhi Mishra
icon
Gurugram
icon
1 - 2 yrs
icon
₹7L - ₹8L / yr
Java
Scala

Experience:

 

The candidate should have about 1+ years of experience with design and development in Java/Scala. Experience with algorithms, data structures, and databases is mandatory.

 

Required Skills:

  1. Java or Scala
  2. Extensive experience in developing web applications. Should possess good OOP knowledge and be aware of enterprise application design patterns.
  3. Should have the ability to analyze, design, develop, and test Spark jobs of varying complexity.
  4. Basic working knowledge of Unix/Linux.

Good To have: -

  1. Python
  2. Distributed Computing
Read more
icon
Remote only
icon
3 - 6 yrs
icon
₹15L - ₹23L / yr
Object Oriented Programming (OOPs)
Amazon Web Services (AWS)
Java
J2EE
Spring Boot
+4 more

Your Opportunity

  • Own and drive business features into tech requirements
  • Design & develop large-scale, real-time server-side systems
  • Quickly create quality prototypes
  • Staying updated on emerging technologies
  • Ensuring that all deliverables adhere to our world class standards
  • Promote coding best practices
  • Mentor and develop junior developers in the team

 

Required Experience:

  • 4+ years of relevant experience as described below
  • Excellent grasp of Core Java, multithreading, and OO design patterns
  • Experience with Scala, functional and reactive programming, and Akka/Play is a plus
  • Excellent understanding of data structures and algorithms
  • Solid grasp of large-scale distributed real-time systems
  • Prior experience building scalable and resilient microservices
  • Solid understanding of relational databases, NoSQL databases, and caching systems
  • Good understanding of Big Data technologies such as Spark and Hadoop is a plus
  • Experience with one of AWS, Azure, or GCP

 

Who you are :

  • You have excellent and effective communication and collaborative skills
  • You love problem solving
  • You stay up to date with the latest technologies and then apply them in real life
  • You love paying attention to detail
  • You thrive in meeting tight deadlines and prioritising workloads
  • Ability to collaborate across multiple functions

 

Education:

Bachelor’s degree in Engineering or equivalent experience within the field

Read more
icon
Noida
icon
4 - 7 yrs
icon
₹30L - ₹38L / yr
Java
Javascript
jQuery
HTML/CSS
Docker
+11 more

Key Responsibilities:

  • Design and implement scalable server-side solutions using Java.
  • Write optimized front-end code using HTML, CSS, and Javascript
  • Write unit, automation, and integration tests
  • Implement quality application logging for operational monitoring at scale
  • Investigate, debug and resolve production site issues
  • Work with co-located teammates to deliver on common goals

TECHNICAL & FUNCTIONAL REQUIREMENTS:

  • Professional experience in enterprise Java software development using Spring MVC frameworks, RESTful APIs, and SOA
  • Proficiency in HTML/CSS/JavaScript/jQuery
  • Experience with Docker, Kubernetes, and microservices
  • Experience with Selenium for UI automated tests written in Cucumber or Scala will be a plus
  • Working knowledge of design patterns and CI/CD principles
  • First class communication skills in written and verbal form
  • Outstanding problem-solving skills
  • A commitment to producing high-quality code with an attention to detail
  • Dedication and a self-motivated desire to learn
  • A collaborative, team-orientated attitude
  • Experience working in the Cloud (AWS)
  • API development experience
  • Exposure to monitoring tools such as ELK, Splunk
Read more
DP
Posted by Priya Goyal
icon
Agra
icon
3 - 5 yrs
icon
₹6L - ₹10L / yr
Java
Scala
Apache Spark
Spark
Hadoop
+1 more
Major Accountabilities

  • Collaborate with the CIO on application architecture and design of our ETL (Extract, Transform, Load) and other aspects of our data pipelines. Our stack is built on top of the well-known Spark ecosystem (e.g. Scala, Python, etc.).
  • Periodically evaluate the architectural landscape for efficiencies in our data pipelines, and define current-state and target-state architecture and transition plans/road maps to achieve the desired architectural state.
  • Conduct/lead and implement proofs of concept to prove out new technologies in support of the architecture vision and guiding principles (e.g. Flink).
  • Assist in the ideation and execution of architectural principles, guidelines, and technology standards that can be leveraged across the team and organization, especially around ETL & data pipelines.
  • Promote consistency between all applications leveraging enterprise automation capabilities.
  • Provide architectural consultation, support, mentoring, and guidance to project teams, e.g. architects, data scientists, developers, etc.
  • Collaborate with the DevOps Lead on technical features.
  • Define and manage work items using Agile methodologies (Kanban, Azure boards, etc.).
  • Lead data engineering efforts (e.g. Scala Spark, PySpark, etc.).

Knowledge & Experience

  • Experienced with Spark, Delta Lake, and Scala, working with petabytes of data (in both batch and streaming flows)
  • Knowledge of a wide variety of open source technologies, including but not limited to: NiFi, Kubernetes, Docker, Hive, Oozie, YARN, Zookeeper, PostgreSQL, RabbitMQ, Elasticsearch
  • A strong understanding of AWS/Azure and/or technology as a service (IaaS, SaaS, PaaS)
  • Strong verbal and written communication skills, and the ability to work effectively across internal and external organizations and virtual teams
  • Appreciation of building high-volume, low-latency systems for the API flow
  • Core dev skills (SOLID principles, IoC, 12-factor app, CI/CD, Git)
  • Messaging, microservice architecture, caching (Redis), containerization, performance and load testing, REST APIs
  • Knowledge of HTML, JavaScript frameworks (preferably Angular 2+), TypeScript
  • Appreciation of Python and C# .NET Core or Java
  • Appreciation of global data privacy requirements and cryptography
  • Experience in system testing and with automated testing, e.g. unit tests, integration tests, mocking/stubbing
  • Relevant industry and other professional qualifications
  • Tertiary qualifications (degree level)

We are an inclusive employer and welcome applicants from all backgrounds. We pride ourselves on our commitment to Equality and Diversity and are committed to removing barriers throughout our hiring process.

Key Requirements

  • Extensive data engineering development experience (e.g., ETL) using well-known stacks (e.g., Scala Spark) - a brief sketch follows this list
  • Experience in technical leadership positions (or looking to gain such experience)
  • Background in software engineering
  • The ability to write technical documentation
  • Solid understanding of virtualization and/or cloud computing technologies (e.g., Docker, Kubernetes)
  • Experience in designing software solutions; enjoys UML and the odd sequence diagram
  • Experience operating within an Agile environment
  • Ability to work independently and with minimum supervision
  • Strong project development management skills, with the ability to successfully manage and prioritize numerous time-pressured analytical projects/work tasks simultaneously
  • Able to pivot quickly and make rapid decisions based on changing needs in a fast-paced environment
  • Works constructively with teams and acts with high integrity
  • Passionate team player with an inquisitive, creative mindset and the ability to think outside the box
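
As a rough illustration of the first requirement, here is a minimal, hypothetical Scala Spark + Delta Lake upsert that merges a day's extract into a Delta table. It assumes the delta-core library is on the classpath; the paths and column names are invented for illustration.

```scala
import io.delta.tables.DeltaTable
import org.apache.spark.sql.SparkSession

object DailyUpsert {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("DailyUpsert").getOrCreate()

    // Hypothetical daily extract landed by an upstream ingestion job
    val updates = spark.read.parquet("s3://my-lake/extracts/2021-08-24/")

    // Upsert: update rows whose key already exists, insert the rest
    DeltaTable.forPath(spark, "s3://my-lake/delta/customers")
      .as("target")
      .merge(updates.as("source"), "target.customer_id = source.customer_id")
      .whenMatched().updateAll()
      .whenNotMatched().insertAll()
      .execute()

    spark.stop()
  }
}
```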
Read more

Wealth Management Platform

Agency job
via Qrata by Prajakta Kulkarni
icon
Bengaluru (Bangalore)
icon
4 - 7 yrs
icon
₹30L - ₹35L / yr
Java
Python
Ruby
Ruby on Rails (ROR)
Go Programming (Golang)
+5 more
Hello Everyone,

Trying to get in touch with you all for an exciting role at one of the startup firms in Wealth Management.

A small description of the company:

This company is building a platform to drive wealth management. They own and operate an online investing platform that distributes mutual funds in India. The platform allows investors to buy and sell equity, debt, and tax-saving mutual funds. It has its headquarters in Bengaluru, India.

Looking for great talent for a Backend Developer role with the below skills:

• Excellent knowledge of at least one ecosystem based on Elixir/Phoenix, Ruby/Rails, Python/Django, or Go/Scala/Clojure
• Good OO skills, including strong design patterns knowledge
• Familiar with datastores like MySQL, PostgreSQL, Redis, Redshift, etc.
• Familiarity with react.js/react-native, vue.js, etc.
• Knowledge of deploying software to AWS, GCP, Azure
• Knowledge of software best practices, like Test-Driven Development (TDD) and Continuous Integration (CI)
Read more

They platform powered by machine learning. (TE1)

Agency job
via Multi Recruit by Paramesh P
icon
Bengaluru (Bangalore)
icon
1.5 - 4 yrs
icon
₹8L - ₹16L / yr
Scala
Java
Spark
Hadoop
Rest API
+1 more
  • Involvement in the overall application lifecycle
  • Design and develop software applications in Scala and Spark
  • Understand business requirements and convert them to technical solutions
  • REST API design, implementation, and integration
  • Collaborate with Frontend developers and provide mentorship for Junior engineers in the team
  • An interest and preferably working experience in agile development methodologies
  • A team player, eager to invest in personal and team growth
  • Staying up to date with cutting edge technologies and best practices
  • Advocate for improvements to product quality, security, and performance

 

Desired Skills and Experience

  • Minimum 1.5+ years of development experience in Scala/Java
  • Strong understanding of the development cycle, programming techniques, and tools
  • Strong problem-solving and verbal and written communication skills
  • Experience in web development using J2EE or similar frameworks
  • Experience in developing REST APIs
  • BE in Computer Science
  • Experience with Akka or microservices is a plus
  • Experience with Big Data technologies like Spark/Hadoop is a plus

The company offers very competitive compensation packages commensurate with your experience. We offer full benefits, continual career & compensation growth, and many other perks.

 

Read more