Scala Jobs in Bangalore (Bengaluru)


Apply to 50+ Scala Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Scala Job opportunities across top companies like Google, Amazon & Adobe.

Sentienz
Posted by Nikita Sinha
Bengaluru (Bangalore)
6 - 12 yrs
Best in industry
Scala
Apache Spark
Python
SQL
NoSQL Databases
+3 more

Key Responsibilities


  • Design, develop, and optimize data pipelines using Apache Spark to process large volumes of structured and unstructured data.
  • Write efficient and maintainable code in Scala and Python for data extraction, transformation, and loading (ETL) operations.
  • Collaborate with cross-functional teams to define data engineering solutions to support analytics and machine learning initiatives.
  • Implement and maintain data lake and warehouse solutions using cloud platforms (e.g., AWS, GCP, Azure).
  • Ensure the performance, scalability, and reliability of data workflows and distributed systems.
  • Perform data quality assessments, implement monitoring, and improve data governance practices.
  • Assist in migrating and refactoring legacy data systems into modern distributed data processing platforms.
  • Provide technical leadership and mentorship to junior engineers and contribute to best practices in coding, testing, and deployment.
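In production these pipelines would typically be written against Spark's Scala or Python APIs; as a self-contained sketch of the extract-transform-load shape the bullets describe, here is a minimal plain-Python version (all record fields and values are hypothetical, and plain Python stands in for Spark to keep the example runnable):

```python
import json

# Hypothetical raw "extract" output: one JSON record per event.
RAW_EVENTS = [
    '{"user": "a1", "bytes": "1024", "ok": true}',
    '{"user": "a1", "bytes": "2048", "ok": true}',
    '{"user": "b2", "bytes": "512",  "ok": false}',
]

def extract(lines):
    """Parse raw JSON lines into dicts (extract step)."""
    return [json.loads(line) for line in lines]

def transform(records):
    """Keep successful events and normalise types (transform step)."""
    return [
        {"user": r["user"], "bytes": int(r["bytes"])}
        for r in records
        if r["ok"]
    ]

def load(records):
    """Aggregate bytes per user, standing in for a warehouse write (load step)."""
    totals = {}
    for r in records:
        totals[r["user"]] = totals.get(r["user"], 0) + r["bytes"]
    return totals

totals = load(transform(extract(RAW_EVENTS)))
print(totals)  # {'a1': 3072}
```

The same three stages map one-to-one onto a Spark job: `extract` becomes a read from a source, `transform` a chain of DataFrame operations, and `load` a write to the lake or warehouse.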

Qualifications


  • Bachelor's or Master’s degree in Computer Science, Engineering, or a related field.
  • 8+ years of hands-on experience in data engineering, with strong skills in Apache Spark, Scala, and Python.
  • Experience with distributed data processing frameworks and real-time data processing.
  • Strong experience with big data technologies such as Hadoop, Hive, and Kafka.
  • Proficient with relational databases (SQL, PostgreSQL, MySQL) and NoSQL databases (Cassandra, HBase, MongoDB). 
  • Knowledge of CI/CD pipelines and DevOps practices for deploying data workflows.
  • Strong problem-solving skills and experience with optimizing large-scale data systems.
  • Excellent communication and collaboration skills.
  • Experience with orchestration tools like Airflow.
  • Experience with containerization and orchestration (e.g., Docker, Kubernetes).
Wissen Technology
Posted by Vijayalakshmi Selvaraj
Mumbai, Bengaluru (Bangalore)
5 - 10 yrs
Best in industry
Java
J2EE
Spring Boot
Hibernate (Java)
Scala
+1 more

Skills Required:


- 4+ years of technical experience in a developer role

- Strong proficiency with Core Java

- Database experience preferably with DB2, Sybase, or Oracle

- Complete SDLC process and Agile Methodology (Scrum)

- Strong oral and written communication skills

- Excellent interpersonal skills and professional approach

- Bachelor’s degree in Computer Science, MIS, or other technology/engineering discipline


Skills Desired:


- Strong proficiency with Scala on Spark

- Previous experience in front office and back office reports

- Strong understanding of Order Life Cycle management from an Equities or Listed Derivatives perspective

- Previous experience in Trade Surveillance or working with data from the order lifecycle

- Good to have: knowledge of Hadoop technologies

- High quality software architecture and design methodologies and patterns

- Work experience as level-3 support for applications

- Layered Architecture, Component-based Architecture

- XML-based technologies

- Unix OS, Scripting, Python or Perl

- Experience in development on other application types (Web applications, batch, or streaming)

Sentienz Solutions Private Limited
Posted by Mihika Haridas
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹13L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more

About Us:

Established in early 2016, Sentienz is an IoT, AI, and big data technology company with a dynamic team of highly skilled data scientists and engineers who specialize in building internet-scale platforms and petabyte-scale digital insights solutions. We are experts in developing state-of-the-art machine learning models as well as advanced analytics platforms.

Sentienz is proud of its flagship product, Sentienz Akiro, an AI-powered connectivity platform. Akiro transforms consumer engagement by letting users communicate easily across their devices, offers essential feedback on customer engagement within your app, and enables IoT M2M communication with unmatched real-time access.

If you want to be part of the future of AI-driven connectivity and IoT innovation at Sentienz, then join us today!


Position: Senior Data Engineer


Job Specifications:

  • 5+ years of experience in data engineering or a similar role.
  • Proficiency in programming languages such as Scala, Java, and Python.
  • Experience with big data technologies such as Spark.
  • Strong SQL skills and experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
  • Hands-on experience with data integration tools like Treeno and Airbyte.
  • Experience with scheduling and workflow automation tools such as Dolphin Scheduler.
  • Familiarity with data visualization tools like Superset.
  • Hands-on experience with cloud platforms (e.g., AWS, Google Cloud, Azure) and their data services.
  • Bachelor's or Master's degree in Computer Science or a related field.
  • Excellent problem-solving skills, attention to detail, and good communication skills.
  • Candidates must be fast learners and adaptable to a fast-paced development environment.


Location: On-site; Bangalore/Bangalore Urban

Availability to Join: Immediate Joiners

Confidential

Agency job
via Arnold Consultants by Sampreetha Pai
Bengaluru (Bangalore)
8 - 13 yrs
₹30L - ₹35L / yr
Java
MongoDB
C#
Python
NodeJS (Node.js)
+3 more

About this role: We are seeking an experienced MongoDB Developer/DBA who will be responsible for maintaining MongoDB databases while optimizing the performance, security, and availability of MongoDB clusters. As a key member of our team, you'll play a crucial role in ensuring our data infrastructure runs smoothly.

You'll have the following responsibilities:

  • Maintain and configure MongoDB instances: build, design, deploy, maintain, and lead the MongoDB Atlas infrastructure. Keep clear documentation of the database setup and architecture.
  • Own governance: define and enforce policies in MongoDB Atlas. Provide consultancy in designing the infrastructure (MongoDB Atlas) for each use case.
  • Put a service and governance wrap in place to restrict over-provisioning of server size, number of clusters per project, and scaling through MongoDB Atlas.
  • Gather and document detailed business requirements applicable to the data layer. Design, configure, and manage MongoDB on Atlas.
  • Design, develop, test, document, and deploy high-quality technical solutions on the MongoDB Atlas platform, based on industry best practices, to solve business needs. Resolve technical issues raised by the team and/or customer and manage escalations as required.
  • Migrate data from on-premises MongoDB and RDBMS to MongoDB Atlas. Communicate and collaborate with other technical resources and customers, providing timely updates on the status of deliverables, shedding light on technical issues, and obtaining buy-in on creative solutions.
  • Write procedures for backup and disaster recovery.


You'll have the following skills & experience:

  • Excellent analytical, diagnostic, and problem-solving skills.
  • Solid understanding of database concepts and expertise in designing and developing NoSQL databases such as MongoDB.
  • MongoDB query operations, and import and export operations in the database.
  • Experience in ETL methodology for performing data migration, extraction, transformation, data profiling, and loading.
  • Migrating databases by ETL and by manual processes: design, development, and implementation.
  • General networking skills, especially in the context of a public cloud (e.g. AWS: VPCs, subnets, routing tables, NAT/internet gateways, DNS, security groups).
  • Experience using Terraform as an IaC tool for setting up infrastructure on AWS Cloud. Performing database backups and recovery.
  • Competence in at least one of the following languages (in no particular order): Java, C++, C#, Python, Node.js (JavaScript), Ruby, Perl, Scala, Go.
  • Excellent communication skills: able to compromise while drawing out the risks and constraints associated with solutions. Able to work independently and collaborate with other teams.
  • Proficiency in configuring schemas and MongoDB data modeling.
  • Strong understanding of SQL and NoSQL databases.
  • Comfortable with MongoDB syntax.
  • Experience with database security management.
  • Performance optimization: ensure databases achieve maximum performance and availability. Design effective indexing strategies.
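The indexing point above applies to MongoDB (`createIndex` on frequently filtered fields) and relational stores alike. As a self-contained sketch of why an indexing strategy matters, the following uses Python's built-in sqlite3 module (not MongoDB) to show the query planner switching from a full table scan to an index search; the table and index names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [("acme", 10.0), ("acme", 25.0), ("globex", 5.0)],
)

query = "SELECT * FROM orders WHERE customer = ?"

# Without an index, filtering on `customer` scans the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, ("acme",)).fetchone()[3]
print(plan_before)  # e.g. "SCAN orders"

# An index on the filtered column lets the planner seek instead of scan.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, ("acme",)).fetchone()[3]
print(plan_after)  # e.g. "SEARCH orders USING INDEX idx_orders_customer (customer=?)"
```

The same reasoning drives MongoDB index design: match indexes to the fields your queries actually filter and sort on, and verify with `explain()` rather than guessing.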

Publicis Sapient
Posted by Mohit Singh
Bengaluru (Bangalore), Pune, Hyderabad, Gurugram, Noida
5 - 11 yrs
₹20L - ₹36L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+7 more

Publicis Sapient Overview:

The Senior Associate L2 in Data Engineering translates client requirements into technical designs and implements components for data engineering solutions, utilizing a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drives design discussions to ensure the necessary health of the overall solution.

Job Summary:

As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, and wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is also required.


Role & Responsibilities:

Your role is focused on Design, Development and delivery of solutions involving:

• Data Integration, Processing & Governance

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Implement scalable architectural models for data processing and storage

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode

• Build functionality for data analytics, search and aggregation

Experience Guidelines:

Mandatory Experience and Competencies:

1. Overall 5+ years of IT experience, with 3+ years in data-related technologies.

2. Minimum 2.5 years of experience in Big Data technologies and working exposure to related data services on at least one cloud platform (AWS / Azure / GCP).

3. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build end-to-end data pipelines.

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred.

5. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, and GCP BigQuery.

6. Working knowledge of data-platform-related services, IAM, and data security on at least one cloud platform.


Preferred Experience and Knowledge (Good to Have):

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience.

2. Knowledge of data governance processes (security, lineage, catalog) and tools such as Collibra and Alation.

3. Knowledge of distributed messaging frameworks such as ActiveMQ / RabbitMQ / Solace, search and indexing, and microservices architectures.

4. Performance tuning and optimization of data pipelines.

5. CI/CD: infrastructure provisioning on cloud, automated build and deployment pipelines, code quality.

6. Cloud data specialty and other related Big Data technology certifications.


Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes


Publicis Sapient
Posted by Mohit Singh
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Noida
4 - 10 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+6 more

Publicis Sapient Overview:

The Senior Associate L1 in Data Engineering translates client requirements into technical designs and implements components for data engineering solutions, utilizing a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drives design discussions to ensure the necessary health of the overall solution.

Job Summary:

As a Senior Associate L1 in Data Engineering, you will produce technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, and wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferable.


Role & Responsibilities:

Job Title: Senior Associate L1 – Data Engineering

Your role is focused on Design, Development and delivery of solutions involving:

• Data Ingestion, Integration and Transformation

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time

• Build functionality for data analytics, search and aggregation


Experience Guidelines:

Mandatory Experience and Competencies:

1. Overall 3.5+ years of IT experience, with 1.5+ years in data-related technologies.

2. Minimum 1.5 years of experience in Big Data technologies.

3. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred.

5. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, and GCP BigQuery.


Preferred Experience and Knowledge (Good to Have):

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience.

2. Knowledge of data governance processes (security, lineage, catalog) and tools such as Collibra and Alation.

3. Knowledge of distributed messaging frameworks such as ActiveMQ / RabbitMQ / Solace, search and indexing, and microservices architectures.

4. Performance tuning and optimization of data pipelines.

5. CI/CD: infrastructure provisioning on cloud, automated build and deployment pipelines, code quality.

6. Working knowledge of data-platform-related services, IAM, and data security on at least one cloud platform.

7. Cloud data specialty and other related Big Data technology certifications.



Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes

Eloelo
Posted by Vikas Saini
Bengaluru (Bangalore)
4 - 7 yrs
₹15L - ₹30L / yr
Python
Scala
MS SQL Server
Amazon Web Services (AWS)

Design a multi-tier data pipeline to feed data into applications for building a full-featured analytics environment. Develop high-quality code to support the platform's technical architecture and design. Participate in and contribute to an effective software development lifecycle using Scrum and Agile. Collaborate with global teams and work as one team.

What you get to do:

You'll work on the design, implementation, and maintenance of data pipelines. Design and build database schemas to handle large-scale data migration and transformation. Capable of designing a high-performance, scalable, distributed product in the cloud (AWS, GCS). Review development frameworks and coding standards, conduct code reviews and walkthroughs, and run in-depth design reviews. Identify gaps in the existing infrastructure and advocate for the necessary changes to close them.

Who we are looking for:

4 to 7 years of industry experience working with Spark and Scala/Python. Working experience with big-data tech stacks like Spark, Kafka, and Athena. Extensive experience in SQL query optimization/tuning and debugging SQL performance issues. Experience with ETL/ELT processes to move data through the data processing pipeline. Be a fearless leader in championing smart design.

Top 3 primary skills and expertise-level requirements (1 to 5; 5 being expert):

Excellent programming experience in Scala or Python. Good experience in SQL queries and optimizations. 2 to 3 years of Spark experience. Nice to have: experience with Airflow. Nice to have: experience with AWS EMR, Lambda, and S3.

Employment Type - FULLTIME

Industry Type - Media / Entertainment / Internet

Seniority Level - Mid-Senior-Level

Work Experience(in years) - 4 - 7 Years

Education - B.Tech/B.E.

Skills - Python, Scala, MS SQL Server, AWS

Thoughtworks
Posted by Sunidhi Thakur
Bengaluru (Bangalore)
10 - 13 yrs
Best in industry
Data modeling
PySpark
Data engineering
Big Data
Hadoop
+10 more

Lead Data Engineer

 

Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.

 

Job responsibilities

 

·      You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems

·      You will partner with teammates to create complex data processing pipelines in order to solve our clients' most ambitious challenges

·      You will collaborate with Data Scientists in order to design scalable implementations of their models

·      You will pair to write clean and iterative code based on TDD

·      Leverage various continuous delivery practices to deploy, support and operate data pipelines

·      Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available

·      Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions

·      Create data models and speak to the tradeoffs of different modeling approaches

·      On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product

·      Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process

·      Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes

 

Job qualifications

Technical skills

·      You are equally happy coding and leading a team to implement a solution

·      You have a track record of innovation and expertise in Data Engineering

·      You're passionate about craftsmanship and have applied your expertise across a range of industries and organizations

·      You have a deep understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop

·      You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting

·      Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based (AWS EMR, Azure HDInsight, Qubole, etc.) Hadoop distributions

·      You are comfortable taking data-driven approaches and applying data security strategy to solve business problems

·      You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments

·      Working with data excites you: you have created Big data architecture, you can build and operate data pipelines, and maintain data storage, all within distributed systems

 

Professional skills


·      Advocate your data engineering expertise to the broader tech community outside of Thoughtworks, speaking at conferences and acting as a mentor for more junior-level data engineers

·      You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives

·      An interest in coaching others, sharing your experience and knowledge with teammates

·      You enjoy influencing others and always advocate for technical excellence while being open to change when needed

Molecular Connections
Posted by Molecular Connections
Bengaluru (Bangalore)
8 - 10 yrs
₹15L - ₹20L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more
  1. Big Data developer with 8+ years of professional IT experience and expertise in Hadoop ecosystem components for ingestion, data modeling, querying, processing, storage, analysis, and data integration, implementing enterprise-level systems spanning Big Data.
  2. A skilled developer with strong problem solving, debugging and analytical capabilities, who actively engages in understanding customer requirements.
  3. Expertise in Apache Hadoop ecosystem components like Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, Zookeeper, YARN, Flume, Pig, NiFi, Scala, and Oozie.
  4. Hands-on experience in creating real-time data streaming solutions using Apache Spark Core, Spark SQL & DataFrames, Kafka, Spark Streaming, and Apache Storm.
  5. Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, including NameNode, DataNode, ResourceManager, NodeManager, and Job History Server.
  6. Worked on both Cloudera and Hortonworks Hadoop distributions. Experience in managing Hadoop clusters using the Cloudera Manager tool.
  7. Well versed in installation, Configuration, Managing of Big Data and underlying infrastructure of Hadoop Cluster.
  8. Hands on experience in coding MapReduce/Yarn Programs using Java, Scala and Python for analyzing Big Data.
  9. Exposure to Cloudera development environment and management using Cloudera Manager.
  10. Extensively worked on Spark using Scala on clusters for computational analytics; installed it on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL/Oracle.
  11. Implemented Spark using Python, utilizing DataFrames and the Spark SQL API for faster processing of data; handled importing data from different data sources into HDFS using Sqoop, performed transformations using Hive and MapReduce, and then loaded data into HDFS.
  12. Used Spark Data Frames API over Cloudera platform to perform analytics on Hive data.
  13. Hands on experience in MLlib from Spark which are used for predictive intelligence, customer segmentation and for smooth maintenance in Spark streaming.
  14. Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
  15. Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
  16. Working on creating data pipeline for different events of ingestion, aggregation, and load consumer response data into Hive external tables in HDFS location to serve as feed for tableau dashboards.
  17. Hands on experience in using Sqoop to import data into HDFS from RDBMS and vice-versa.
  18. In-depth Understanding of Oozie to schedule all Hive/Sqoop/HBase jobs.
  19. Hands on expertise in real time analytics with Apache Spark.
  20. Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
  21. Extensive experience in working with different ETL tool environments like SSIS, Informatica and reporting tool environments like SQL Server Reporting Services (SSRS).
  22. Experience in the Microsoft cloud and in setting up clusters in Amazon EC2 & S3, including automating the setup and extension of clusters in the AWS cloud.
  23. Extensively worked on Spark using Python on cluster for computational (analytics), installed it on top of Hadoop performed advanced analytical application by making use of Spark with Hive and SQL.
  24. Strong experience and knowledge of real time data analytics using Spark Streaming, Kafka and Flume.
  25. Knowledge in installation, configuration, supporting and managing Hadoop Clusters using Apache, Cloudera (CDH3, CDH4) distributions and on Amazon web services (AWS).
  26. Experienced in writing Ad Hoc queries using Cloudera Impala, also used Impala analytical functions.
  27. Experience in creating Data frames using PySpark and performing operation on the Data frames using Python.
  28. In depth understanding/knowledge of Hadoop Architecture and various components such as HDFS and MapReduce Programming Paradigm, High Availability and YARN architecture.
  29. Establishing multiple connections to different Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and providing access for pulling the information needed for analysis.
  30. Generated various kinds of knowledge reports using Power BI based on Business specification. 
  31. Developed interactive Tableau dashboards to provide a clear understanding of industry specific KPIs using quick filters and parameters to handle them more efficiently.
  32. Well Experience in projects using JIRA, Testing, Maven and Jenkins build tools.
  33. Experienced in designing, building, deploying, and utilizing almost all of the AWS stack (including EC2 and S3), focusing on high availability, fault tolerance, and auto-scaling.
  34. Good experience with use-case development, with Software methodologies like Agile and Waterfall.
  35. Working knowledge of Amazon's Elastic Cloud Compute( EC2 ) infrastructure for computational tasks and Simple Storage Service ( S3 ) as Storage mechanism.
  36. Good working experience in importing data using Sqoop and SFTP from various sources like RDBMS, Teradata, Mainframes, Oracle, and Netezza to HDFS, and performing transformations on it using Hive, Pig, and Spark.
  37. Extensive experience in Text Analytics, developing different Statistical Machine Learning solutions to various business problems and generating data visualizations using Python and R.
  38. Proficient in NoSQL databases including HBase, Cassandra, MongoDB and its integration with Hadoop cluster.
  39. Hands on experience in Hadoop Big data technology working on MapReduce, Pig, Hive as Analysis tool, Sqoop and Flume data import/export tools.
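Item 20 in the list above, converting Hive/SQL queries into RDD transformations, can be sketched in plain Python (the data is hypothetical, and list comprehensions plus `groupby` stand in for Spark's distributed `filter`/`reduceByKey`):

```python
from itertools import groupby

# Hypothetical input: (page, hits) pairs, as might come from a Hive table scan.
pairs = [("home", 3), ("search", 1), ("home", 2), ("cart", 5), ("search", 4)]

# SQL: SELECT page, SUM(hits) FROM logs WHERE hits > 1 GROUP BY page
# expressed as RDD-style transformations (plain-Python stand-ins):
filtered = [p for p in pairs if p[1] > 1]                 # rdd.filter(...)
grouped = groupby(sorted(filtered), key=lambda p: p[0])   # shuffle by key
summed = {k: sum(v for _, v in g) for k, g in grouped}    # .reduceByKey(add)
print(summed)  # {'cart': 5, 'home': 5, 'search': 4}
```

In Spark the sort-then-group step is the shuffle the engine performs for you; the WHERE clause becomes a `filter` and the GROUP BY/SUM becomes a `reduceByKey`.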
Epik Solutions
Posted by Sakshi Sarraf
Bengaluru (Bangalore), Noida
5 - 10 yrs
₹7L - ₹28L / yr
Python
SQL
Databricks
Scala
Spark
+2 more

Job Description:


As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:


Designing and developing data pipelines: You will design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala. This includes data ingestion, data transformation, and data loading processes.


Data modeling and database design: You will design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.


Data integration and orchestration: You will leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various data sources and targets. This includes scheduling and monitoring data pipelines.


Data quality and governance: You will implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.
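A data quality check of the kind described here can be as simple as a set of per-column predicates that route failing rows to a reject stream. Below is a minimal plain-Python sketch (the rule names and sample rows are hypothetical; in an Azure Databricks pipeline this logic would typically run as a validation stage before loading):

```python
# Hypothetical validation rules: each maps a column to a predicate.
RULES = {
    "order_id": lambda v: isinstance(v, int) and v > 0,
    "amount":   lambda v: isinstance(v, (int, float)) and v >= 0,
    "country":  lambda v: v in {"IN", "US", "GB"},
}

def validate(rows, rules):
    """Split rows into (valid, rejected), recording which rules each row failed."""
    valid, rejected = [], []
    for row in rows:
        failures = [col for col, check in rules.items() if not check(row.get(col))]
        (valid if not failures else rejected).append((row, failures))
    return valid, rejected

rows = [
    {"order_id": 1, "amount": 9.99, "country": "IN"},
    {"order_id": -5, "amount": 3.50, "country": "US"},   # bad order_id
    {"order_id": 2, "amount": 1.00, "country": "XX"},    # bad country
]
valid, rejected = validate(rows, RULES)
print(len(valid), len(rejected))  # 1 2
```

Recording the failed rule names alongside each rejected row is what makes the check auditable, which is the governance half of the requirement.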


Performance optimization: You will optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.


Monitoring and troubleshooting: You will monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation. You will work closely with cross-functional teams to resolve data-related problems.


Documentation and collaboration: You will document data pipelines, data flows, and data transformation processes. You will collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide data engineering support.


Skills and Qualifications:


Strong experience with Azure Databricks, Python, SQL, ADF, PySpark, and Scala.

Proficiency in designing and developing data pipelines and ETL processes.

Solid understanding of data modeling concepts and database design principles.

Familiarity with data integration and orchestration using Azure Data Factory.

Knowledge of data quality management and data governance practices.

Experience with performance tuning and optimization of data pipelines.

Strong problem-solving and troubleshooting skills related to data engineering.

Excellent collaboration and communication skills to work effectively in cross-functional teams.

Understanding of cloud computing principles and experience with Azure services.


Conviva

at Conviva

1 recruiter
Anusha Bondada
Posted by Anusha Bondada
Bengaluru (Bangalore)
3 - 6 yrs
₹20L - ₹40L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+9 more

As Conviva is expanding, we are building products providing deep insights into end-user experience for our customers.

 

Platform and TLB Team

The vision for the TLB team is to build data processing software that works on terabytes of streaming data in real-time. Engineer the next-gen Spark-like system for in-memory computation of large time-series datasets – both Spark-like backend infra and library-based programming model. Build a horizontally and vertically scalable system that analyses trillions of events per day within sub-second latencies. Utilize the latest and greatest big data technologies to build solutions for use cases across multiple verticals. Lead technology innovation and advancement that will have a big business impact for years to come. Be part of a worldwide team building software using the latest technologies and the best of software development tools and processes.
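The kind of time-series computation described above can be sketched, heavily simplified, as a tumbling-window aggregation (plain Scala in place of the actual stream-processing engine; the event shape and window size are illustrative):

```scala
// Sketch of a tumbling-window aggregation over timestamped events, the core
// computation a time-series stream processor performs at scale.
object WindowSketch {
  final case class Event(timestampMs: Long, value: Long)

  // Buckets events into fixed windows of `windowMs` and sums values per window;
  // the map key is the start timestamp of each window.
  def tumblingSums(events: Seq[Event], windowMs: Long): Map[Long, Long] =
    events.groupMapReduce(e => e.timestampMs / windowMs * windowMs)(_.value)(_ + _)
}
```

A production engine does the same bucketing incrementally over an unbounded stream, with watermarking for late events, rather than over an in-memory batch.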

 

What You’ll Do

This is an individual contributor position. Expectations will be on the below lines:

  • Design, build and maintain the stream processing, and time-series analysis system which is at the heart of Conviva’s products
  • Responsible for the architecture of the Conviva platform
  • Build features, enhancements, and new services, and fix bugs in Scala and Java on a Jenkins-based pipeline, deployed as Docker containers on Kubernetes
  • Own the entire lifecycle of your microservice including early specs, design, technology choice, development, unit-testing, integration-testing, documentation, deployment, troubleshooting, enhancements, etc.
  • Lead a team to develop a feature or parts of a product
  • Adhere to the Agile model of software development to plan, estimate, and ship per business priority

 

What you need to succeed

  • 5+ years of work experience in software development of data processing products.
  • Engineering degree in software or equivalent from a premier institute.
  • Excellent knowledge of Computer Science fundamentals such as algorithms and data structures. Hands-on experience with functional programming and a solid grasp of its concepts
  • Excellent programming and debugging skills on the JVM. Proficient in writing code in Scala/Java/Rust/Haskell/Erlang that is reliable, maintainable, secure, and performant
  • Experience with big data technologies like Spark, Flink, Kafka, Druid, HDFS, etc.
  • Deep understanding of distributed systems concepts and scalability challenges including multi-threading, concurrency, sharding, partitioning, etc.
  • Experience/knowledge of Akka/Lagom framework and/or stream processing technologies like RxJava or Project Reactor will be a big plus. Knowledge of design patterns like event-streaming, CQRS and DDD to build large microservice architectures will be a big plus
  • Excellent communication skills. Willingness to work under pressure. Hunger to learn and succeed. Comfortable with ambiguity. Comfortable with complexity
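Of the scalability concepts listed above, key-based partitioning is easy to sketch: a key is hashed to a fixed shard so all events for that key land on the same node (a simplified illustration in plain Scala; shard counts and key names are invented):

```scala
// Sketch of hash-based partitioning, one of the sharding ideas mentioned above.
object PartitionSketch {
  // Non-negative modulo keeps shard ids in [0, shards) even for negative hashCodes.
  def shardFor(key: String, shards: Int): Int =
    ((key.hashCode % shards) + shards) % shards

  // Groups a batch of keys by their target shard.
  def partition(keys: Seq[String], shards: Int): Map[Int, Seq[String]] =
    keys.groupBy(shardFor(_, shards))
}
```

The design choice here is determinism: the same key always maps to the same shard, which is what makes per-key state and ordering tractable in a distributed stream processor (real systems often use consistent hashing so resharding moves fewer keys).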

 

Underpinning the Conviva platform is a rich history of innovation. More than 60 patents represent award-winning technologies and standards, including first-of-its-kind innovations like time-state analytics and AI-automated data modeling that surface actionable insights. By understanding real-world human experiences and having the ability to act within seconds of observation, our customers can solve business-critical issues and focus on growing their business ahead of the competition. Examples of the brands Conviva has helped fuel streaming growth for include: DAZN, Disney+, HBO, Hulu, NBCUniversal, Paramount+, Peacock, Sky, Sling TV, Univision and Warner Bros Discovery.

Privately held, Conviva is headquartered in Silicon Valley, California with offices and people around the globe. For more information, visit us at www.conviva.com. Join us to help extend our leadership position in big data streaming analytics to new audiences and markets! 


Kloud9 Technologies
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹30L / yr
Google Cloud Platform (GCP)
PySpark
Python
Scala

About Kloud9:

 

Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.

 

Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. E-commerce in any industry is constrained by the heavy cost of maintaining physical data infrastructure.

 

At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.

 

Our sole focus is to provide cloud expertise to retail industry giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers and developers have been designing, building and implementing solutions for retailers for an average of more than 20 years.

 

We are a cloud vendor that is both platform and technology independent. Our vendor independence not only provides us with a unique perspective into the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.


●    Overall 8+ Years of Experience in Web Application development.

●    5+ Years of development experience with Java 8, Spring Boot, Microservices and middleware

●    3+ Years of experience designing middleware on the Node.js platform.

●    Good to have: 2+ years of experience using Node.js with the AWS Serverless platform.

●    Good experience with JavaScript/TypeScript, event loops, Express.js, GraphQL, SQL DB (MySQL), NoSQL DB (MongoDB) and YAML templates.

●    Good experience with Test-Driven Development (TDD) and automated unit testing.

●    Good experience with exposing and consuming REST APIs on the Java 8 / Spring Boot platform and with Swagger API contracts.

●    Good experience building Node.js middleware performing transformations, routing, aggregation, orchestration and authentication (JWT/OAuth).

●    Experience supporting and working with cross-functional teams in a dynamic environment.

●    Experience working in Agile Scrum Methodology.

●    Very good Problem-Solving Skills.

●    Very good learner and passion for technology.

●     Excellent verbal and written communication skills in English

●     Ability to communicate effectively with team members and business stakeholders


Secondary Skill Requirements:

 

● Experience working with any of Loopback, NestJS, Hapi.JS, Sails.JS, Passport.JS


Why Explore a Career at Kloud9:

 

With job opportunities in prime locations of US, London, Poland and Bengaluru, we help build your career paths in cutting edge technologies of AI, Machine Learning and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with their creativity and innovative solutions. Our vested interest in our employees translates to deliver the best products and solutions to our customers.

Inviz Ai Solutions Private Limited
Shridhar Nayak
Posted by Shridhar Nayak
Bengaluru (Bangalore)
4 - 8 yrs
Best in industry
Spark
Hadoop
Big Data
Data engineering
PySpark
+8 more

InViz is a Bangalore-based startup helping enterprises simplify the search and discovery experience for both their end customers and their internal users. We use state-of-the-art technologies in Computer Vision, Natural Language Processing, Text Mining, and other ML techniques to extract information/concepts from data in different formats - text, images, videos - and make them easily discoverable through simple, human-friendly touchpoints.

 

TSDE - Data 

Data Engineer:

 

  • Should have 3-6 years of total experience in Data Engineering.
  • Person should have experience coding data pipelines on GCP.
  • Prior experience with Hadoop systems is ideal, as the candidate may not have end-to-end GCP experience.
  • Strong in programming languages like Scala, Python, Java.
  • Good understanding of various data storage formats and their advantages.
  • Should have exposure to GCP tools to develop end-to-end data pipelines for various scenarios (including ingesting data from traditional databases as well as integration of API-based data sources).
  • Should have a business mindset to understand data and how it will be used for BI and Analytics purposes.
  • Data Engineer Certification preferred

 

Experience working with GCP tools like:

 
 

Store: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore

 

Ingest: Stackdriver, Pub/Sub, App Engine, Kubernetes Engine, Kafka, Dataprep, Microservices

 

Schedule : Cloud Composer

 

Processing: Cloud Dataproc, Cloud Dataflow, Cloud Dataprep

 

CI/CD - Bitbucket + Jenkins / GitLab

 

Atlassian Suite

 

 

 

HL
Bengaluru (Bangalore)
6 - 15 yrs
₹1L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more
• 8+ years of experience in developing Big Data applications
• Strong experience working with Big Data technologies like Spark (Scala/Java),
• Apache Solr, HIVE, HBase, ElasticSearch, MongoDB, Airflow, Oozie, etc.
• Experience working with Relational databases like MySQL, SQLServer, Oracle etc.
• Good understanding of large system architecture and design
• Experience working in AWS/Azure cloud environment is a plus
• Experience using Version Control tools such as Bitbucket/GIT code repository
• Experience using tools like Maven/Jenkins, JIRA
• Experience working in an Agile software delivery environment, with exposure to
continuous integration and continuous delivery tools
• Passionate about technology and delivering solutions to solve complex business
problems
• Great collaboration and interpersonal skills
• Ability to work with team members and lead by example in code, feature
development, and knowledge sharing
HCL Technologies

at HCL Technologies

3 recruiters
Agency job
via Saiva System by Sunny Kumar
Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Bengaluru (Bangalore), Hyderabad, Chennai, Pune, Mumbai, Kolkata
5 - 10 yrs
₹5L - ₹20L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more
Exp: 5+ years
Skills: Spark and Scala, along with Azure
Location: Pan India

Looking for someone with Big Data experience along with Azure
Perfios

Agency job
via Seven N Half by Susmitha Goddindla
Bengaluru (Bangalore)
4 - 6 yrs
₹4L - ₹15L / yr
SQL
ETL tool
python developer
MongoDB
Data Science
+15 more
Job Description
1. ROLE AND RESPONSIBILITIES
1.1. Implement next generation intelligent data platform solutions that help build high performance distributed systems.
1.2. Proactively diagnose problems and envisage long term life of the product focusing on reusable, extensible components.
1.3. Ensure agile delivery processes.
1.4. Work collaboratively with stake holders including product and engineering teams.
1.5. Build best-practices in the engineering team.
2. PRIMARY SKILL REQUIRED
2.1. 2-6 years of core software product development experience.
2.2. Experience of working with data-intensive projects, with a variety of technology stacks including different programming languages (Java,
Python, Scala)
2.3. Experience in building infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data
sources to support other teams to run pipelines/jobs/reports etc.
2.4. Experience in Open-source stack
2.5. Experiences of working with RDBMS databases, NoSQL Databases
2.6. Knowledge of enterprise data lakes, data analytics, reporting, in-memory data handling, etc.
2.7. Have core computer science academic background
2.8. Aspire to continue to pursue career in technical stream
3. Optional Skill Required:
3.1. Understanding of Big Data technologies and Machine learning/Deep learning
3.2. Understanding of diverse set of databases like MongoDB, Cassandra, Redshift, Postgres, etc.
3.3. Understanding of Cloud Platform: AWS, Azure, GCP, etc.
3.4. Experience in BFSI domain is a plus.
4. PREFERRED SKILLS
4.1. A startup mentality: comfort with ambiguity, a willingness to test, learn and improve rapidly
QUT

Agency job
via Hiringhut Solutions Pvt Ltd by Neha Bhattarai
Bengaluru (Bangalore)
4 - 7 yrs
₹7L - ₹10L / yr
Java
J2EE
Spring Boot
Hibernate (Java)
Apache Kafka
+5 more
What You'll Do

•Design and develop distributed, scalable, high availability web services.
•Work independently completing small to Mid-sized projects while
managing competing priorities in a demanding production environment.
•You will write reusable, maintainable, quality code.

What You'll Bring

•BS in CS (or equivalent) and 4+ years of hands-on software design and
development experience in building high-availability, scalable backend
systems.
•hands-on coding experience is a must.
•Expertise in working on Java technology stacks in Linux environment -
Java, Spring/ Hibernate, MVC frameworks, TestNG, JUnit.
•Expertise in Database Schema Design, performance efficiency, and SQL
working on leading RDBMS such as MySQL, Oracle, MSSQL, etc.
•Expertise in OOAD, RESTful Web Services, and building scalable systems
Preferred Qualifications:
•Experience using Platforms such as Drools, Solr, Memcached, AKKA, Scala,
Kafka etc. is a plus
•Participation in and contributions to open-source software development
QUT

Agency job
via Hiringhut Solutions Pvt Ltd by Neha Bhattarai
Bengaluru (Bangalore)
3 - 7 yrs
₹1L - ₹10L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+8 more
What You'll Bring

•3+ years of experience in big data & data warehousing technologies
•Experience in processing and organizing large data sets
•Experience with big data tool sets such as Airflow and Oozie

•Experience working with BigQuery, Snowflake or MPP, Kafka, Azure, GCP and AWS
•Experience developing in programming languages such as SQL, Python, Java or Scala
•Experience pulling data from a variety of database systems like SQL Server, MariaDB, Cassandra and other NoSQL databases
•Experience working with retail, advertising or media data at large scale
•Experience working with data science engineering, advanced data insights development
•A strong proponent of quality who strives to impress with his/her work
•Strong problem-solving skills and ability to navigate complicated database relationships
•Good written and verbal communication skills; demonstrated ability to work with product
management and/or business users to understand their needs.
Tier 1 MNC

Agency job
Chennai, Pune, Bengaluru (Bangalore), Noida, Gurugram, Kochi (Cochin), Coimbatore, Hyderabad, Mumbai, Navi Mumbai
3 - 12 yrs
₹3L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+1 more
Greetings,
We are hiring a software developer with good knowledge of Spark, Hadoop and Scala for a Tier 1 MNC.
This company provides on-demand cloud computing platforms.

Agency job
via New Era India by Niharica Singh
Remote, Pune, Mumbai, Bengaluru (Bangalore), Gurugram, Hyderabad
15 - 25 yrs
₹35L - ₹55L / yr
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Windows Azure
Architecture
Python
+5 more
  • 15+ years of Hands-on technical application architecture experience and Application build/ modernization experience
  • 15+ years of experience as a technical specialist in Customer-facing roles.
  • Ability to travel to client locations as needed (25-50%)
  • Extensive experience architecting, designing and programming applications in an AWS Cloud environment
  • Experience with designing and building applications using AWS services such as EC2, AWS Elastic Beanstalk, AWS OpsWorks
  • Experience architecting highly available systems that utilize load balancing, horizontal scalability and high availability
  • Hands-on programming skills in any of the following: Python, Java, Node.js, Ruby, .NET or Scala
  • Agile software development expert
  • Experience with continuous integration tools (e.g. Jenkins)
  • Hands-on familiarity with CloudFormation
  • Experience with configuration management platforms (e.g. Chef, Puppet, Salt, or Ansible)
  • Strong scripting skills (e.g. Powershell, Python, Bash, Ruby, Perl, etc.)
  • Strong practical application development experience on Linux and Windows-based systems
  • Extracurricular software development passion (e.g. active open-source contributor)
Product based company

Agency job
via Zyvka Global Services by Ridhima Sharma
Bengaluru (Bangalore)
3 - 12 yrs
₹5L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+6 more

Responsibilities:

  • Should act as a technical resource for the Data Science team and be involved in creating and implementing current and future Analytics projects like data lake design, data warehouse design, etc.
  • Analysis and design of ETL solutions to store/fetch data from multiple systems like Google Analytics, CleverTap, CRM systems etc.
  • Developing and maintaining data pipelines for real time analytics as well as batch analytics use cases.
  • Collaborate with data scientists and actively work in the feature engineering and data preparation phase of model building
  • Collaborate with product development and dev ops teams in implementing the data collection and aggregation solutions
  • Ensure quality and consistency of the data in Data warehouse and follow best data governance practices
  • Analyse large amounts of information to discover trends and patterns
  • Mine and analyse data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.

Requirements

  • Bachelor’s or Master’s in a highly numerate discipline such as Engineering, Science or Economics
  • 2-6 years of proven experience working as a Data Engineer preferably in ecommerce/web based or consumer technologies company
  • Hands on experience of working with different big data tools like Hadoop, Spark , Flink, Kafka and so on
  • Good understanding of AWS ecosystem for big data analytics
  • Hands on experience in creating data pipelines either using tools or by independently writing scripts
  • Hands on experience in scripting languages like Python, Scala, Unix Shell scripting and so on
  • Strong problem solving skills with an emphasis on product development.
  • Experience using business intelligence tools e.g. Tableau, Power BI would be an added advantage (not mandatory)
Nascentvision

at Nascentvision

1 recruiter
Shanu Mohan
Posted by Shanu Mohan
Gurugram, Mumbai, Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹17L / yr
Python
PySpark
Amazon Web Services (AWS)
Spark
Scala
+2 more
  • Hands-on experience in any Cloud Platform
  • Versed in Spark, Scala/Python, SQL
  • Microsoft Azure experience
  • Experience working on real-time data processing pipelines
Persistent System Ltd

Persistent System Ltd

Agency job
via Milestone Hr Consultancy by Haina khan
Pune, Bengaluru (Bangalore), Hyderabad
4 - 9 yrs
₹8L - ₹27L / yr
Python
PySpark
Amazon Web Services (AWS)
Spark
Scala
Greetings..

We have an urgent requirement for a Data Engineer/Sr. Data Engineer at a reputed MNC.

Exp: 4-9yrs

Location: Pune/Bangalore/Hyderabad

Skills: We need candidates with either Python + AWS, PySpark + AWS, or Spark + Scala
Top 3 Fintech Startup

Agency job
via Jobdost by Sathish Kumar
Bengaluru (Bangalore)
5 - 8 yrs
₹12L - ₹21L / yr
Javascript
NodeJS (Node.js)
Python
Go Programming (Golang)
Scala
+1 more
Job Responsibilities:

● Write clean, reliable, reusable, scalable, testable and
maintainable code.
● Produce best in class documentation, testing and monitoring
● Estimate effort, identify risks
● Mentor/coach other engineers in the team to facilitate their
development and to provide technical leadership to them.
● Rise above details as and when needed to spot broader
issues/trends and implications for the product/team as a whole
● Practice and promote craftsmanship in software engineering
(coding, testing, code reviews, documentation, scalability,
performance, etc.)
● Break down requirements, estimate tasks, and assist in planning
roadmap accurately
● Develop iterative solutions to address expansive product goals
● Platformize components as libraries, utilities and promote reuse
● Be able to conceptualize and develop prototypes quickly
● Own large technical deliverables and execute in a structured
manner.
● Take accountability for the overall health of the products you
build and ensure predictability of the deliverables of your team.
● Drive technical roadmap of the team in collaboration with
Product and Business Teams.

Qualifications:
● B.Tech/BE/MCA in Computer Science or a related technical
discipline (or equivalent). Or high technical acumen and rich
technical experience.
● 4+ years of Expertise with modern Javascript in developing REST
web services
Persistent Systems

at Persistent Systems

1 video
1 recruiter
Agency job
via Milestone Hr Consultancy by Haina khan
Pune, Bengaluru (Bangalore), Hyderabad, Nagpur
4 - 9 yrs
₹4L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more
Greetings..

We have an urgent requirement for Big Data Developer profiles at a reputed MNC.

Location: Pune/Bangalore/Hyderabad/Nagpur
Experience: 4-9yrs

Skills: PySpark + AWS, or Spark + Scala + AWS, or Python + AWS
Wealth Management Platform

Agency job
via Qrata by Prajakta Kulkarni
Bengaluru (Bangalore)
4 - 7 yrs
₹30L - ₹35L / yr
Java
Python
Ruby
Ruby on Rails (ROR)
Go Programming (Golang)
+5 more
Hello Everyone,

Trying to get in touch with you all for an exciting role at one of the startup firms in Wealth Management.

A small description about the Company.

This company is building a platform to drive Wealth Management.
They own and operate an online investing platform that distributes mutual funds in India. The platform allows investors to buy and sell equity, debt, and tax-saving mutual funds. It is headquartered in Bengaluru, India.

 
 


Looking for great talent for a Backend Developer role with the below skills.


• Excellent knowledge of at least one ecosystem based on Elixir/Phoenix, Ruby/Rails, Python/Django,
Go/Scala/Clojure
• Good OO skills, including strong design patterns knowledge
• Familiar with datastores like MySQL, PostgreSQL, Redis, Redshift etc.
• Familiarity with react.js/react-native, vue.js etc.
• Knowledge of deploying software to AWS, GCP, Azure
• Knowledge of software best practices, like Test-Driven Development (TDD) and Continuous Integration.
A platform powered by machine learning. (TE1)

Agency job
via Multi Recruit by Paramesh P
Bengaluru (Bangalore)
1.5 - 4 yrs
₹8L - ₹16L / yr
skill iconScala
skill iconJava
Spark
Hadoop
Rest API
+1 more
  • Involvement in the overall application lifecycle
  • Design and develop software applications in Scala and Spark
  • Understand business requirements and convert them to technical solutions
  • Rest API design, implementation, and integration
  • Collaborate with Frontend developers and provide mentorship for Junior engineers in the team
  • An interest and preferably working experience in agile development methodologies
  • A team player, eager to invest in personal and team growth
  • Staying up to date with cutting edge technologies and best practices
  • Advocate for improvements to product quality, security, and performance

 

Desired Skills and Experience

  • Minimum 1.5+ years of development experience in Scala / Java language
  • Strong understanding of the development cycle, programming techniques, and tools.
  • Strong problem solving and verbal and written communication skills.
  • Experience in working with web development using J2EE or similar frameworks
  • Experience in developing REST API’s
  • BE in Computer Science
  • Experience with Akka or Micro services is a plus
  • Experience with big data technologies like Spark/Hadoop is a plus. The company offers very competitive compensation packages commensurate with your experience, including full benefits, continual career & compensation growth, and many other perks.

 

Big revolution in the e-gaming industry. (GK1)

Agency job
via Multi Recruit by Ayub Pasha
Bengaluru (Bangalore)
2 - 3 yrs
₹15L - ₹20L / yr
skill iconPython
skill iconScala
Hadoop
Spark
Data Engineer
+4 more
  • We are looking for a Data Engineer to build the next-generation mobile applications for our world-class fintech product.
  • The candidate will be responsible for expanding and optimising our data and data pipeline architecture, as well as optimising data flow and collection for cross-functional teams.
  • The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimising data systems and building them from the ground up.
  • Looking for a person with a strong ability to analyse and provide valuable insights to the product and business team to solve daily business problems.
  • You should be able to work in a high-volume environment, have outstanding planning and organisational skills.

 

Qualifications for Data Engineer

 

  • Working knowledge of SQL and experience with relational databases, including query authoring, as well as working familiarity with a variety of databases.
  • Experience building and optimising ‘big data’ data pipelines, architectures, and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets. Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Looking for a candidate with 2-3 years of experience in a Data Engineer role, who is a CS graduate or has an equivalent experience.

 

What we're looking for?

 

  • Experience with big data tools: Hadoop, Spark, Kafka and other alternate tools.
  • Experience with relational SQL and NoSQL databases, including MySQL/Postgres and MongoDB.
  • Experience with data pipeline and workflow management tools: Luigi, Airflow.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
  • Experience with stream-processing systems: Storm, Spark-Streaming.
  • Experience with object-oriented/object function scripting languages: Python, Java, Scala.
Amagi Media Labs

at Amagi Media Labs

3 recruiters
Rajesh C
Posted by Rajesh C
Bengaluru (Bangalore), Chennai
12 - 15 yrs
₹50L - ₹60L / yr
Data Science
Machine Learning (ML)
ETL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+5 more
Job Title: Data Architect
Job Location: Chennai

Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a
Data Architecture strategy across various Data Lake platforms. You will help develop
reference architecture and roadmaps to build highly available, scalable and distributed
data platforms using cloud based solutions to process high volume, high velocity and
wide variety of structured and unstructured data. This role is also responsible for driving
innovation, prototyping, and recommending solutions. Above all, you will influence how
users interact with Condé Nast's industry-leading journalism.
Primary Responsibilities
Data Architect is responsible for
• Demonstrated technology and personal leadership experience in architecting,
designing, and building highly scalable solutions and products.
• Enterprise scale expertise in data management best practices such as data integration,
data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration
frameworks, highly scalable distributed systems using open source and emerging data
architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is
highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor
technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory
databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies
is desirable.
• This role requires 15+ years of data solution architecture, design and development
delivery experience.
• Solid experience in Agile methodologies (Kanban and SCRUM)
Required Skills
• Very Strong Experience in building Large Scale High Performance Data Platforms.
• Passionate about technology and delivering solutions for difficult and intricate
problems. Current on relational and NoSQL databases on the cloud.
• Proven leadership skills, demonstrated ability to mentor, influence and partner with
cross teams to deliver scalable robust solutions.
• Mastery of relational database, NoSQL, ETL/ELT (such as Informatica, DataStage, etc.)
and data integration technologies.
• Experience in any one of Object Oriented Programming (Java, Scala, Python) and
Spark.
• Creative view of markets and technologies combined with a passion to create the
future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and Data
Lakes is mandatory.
• Good understanding of emerging technologies and its applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS etc.
• Understanding of Hadoop Architecture and Hive SQL
• Knowledge of at least one workflow orchestration tool
• Understanding of Agile framework and delivery

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure in Workflow Orchestration like Airflow is a plus
● Exposure in any one of the NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a
plus
● Understanding of Digital web events, ad streams, context models

About Condé Nast

CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms - in other words, a staggering amount of user data. Condé Nast made the right
move to invest heavily in understanding this data and formed a whole new Data team
entirely dedicated to data processing, engineering, analytics, and visualization. This team
helps drive engagement, fuel process innovation, further content enrichment, and increase
market revenue. The Data team aimed to create a company culture where data was the
common language and facilitate an environment where insights shared in real-time could
improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The
team at Condé Nast Chennai works extensively with data to amplify its brands' digital
capabilities and boost online revenue. We are broadly divided into four groups, Data
Intelligence, Data Engineering, Data Science, and Operations (including Product and
Marketing Ops, Client Services) along with Data Strategy and monetization. The teams built
capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are
Condé Nast, and It Starts Here.
Amazon India

Akhil Ravipalli
Posted by Akhil Ravipalli
Bengaluru (Bangalore), Hyderabad, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Chennai, Pune
2 - 9 yrs
₹15L - ₹60L / yr
Systems design
Data Structures
Algorithms
Java
Python
+6 more

As a Software Development Engineer at Amazon, you have industry-leading technical abilities and demonstrate breadth and depth of knowledge. You build software to deliver business impact, making smart technology choices. You work in a team and drive things forward.

 

Top Skills

 

  • You write high quality, maintainable, and robust code, often in Java or C++/C/Python/ROR/C#
  • You recognize and adopt best practices in software engineering: design, testing, version control, documentation, build, deployment, and operations.
  • You have experience building scalable software systems that are high-performance, highly-available, highly transactional, low latency and massively distributed.

Roles & Responsibilities

 

  • You solve problems at their root, stepping back to understand the broader context.
  • You develop pragmatic solutions and build flexible systems that balance engineering complexity and timely delivery, creating business impact.
  • You understand a broad range of data structures and algorithms and apply them to deliver high-performing applications.
  • You recognize and use design patterns to solve business problems.
  • You understand how operating systems work, perform and scale.
  • You continually align your work with Amazon’s business objectives and seek to deliver business value.
  • You collaborate to ensure that decisions are based on the merit of the proposal, not the proposer.
  • You proactively support knowledge-sharing and build good working relationships within the team and with others in Amazon.
  • You communicate clearly with your team and with other groups and listen effectively.

 

Skills & Experience

 

  • Bachelor's or Master's in Computer Science or a relevant technical field.
  • Experience in software development and the full product life-cycle.
  • Excellent programming skills in any object-oriented programming language - preferably Java, C/C++/C#, Perl, Python, or Ruby.
  • Strong knowledge of data structures, algorithms, and designing for performance, scalability, and availability.
  • Proficiency in SQL and data modeling.
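Interview loops for roles like this typically probe exactly the data-structure knowledge listed above. As one illustrative example (not part of the posting), an LRU cache built on Python's OrderedDict gives O(1) get/put with eviction of the least-recently-used entry:

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: evicts the oldest entry once capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # touch "a" so "b" becomes the eviction candidate
cache.put("c", 3)  # capacity exceeded: evicts "b"
```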
Amazon India

Megha Lakshman
Posted by Megha Lakshman
Bengaluru (Bangalore), Hyderabad, Delhi
3 - 9 yrs
₹10L - ₹15L / yr
Java
Python
Go Programming (Golang)
Ruby
C++
+6 more


Role
- Software Development Engineer-2

As a Software Development Engineer at Amazon, you have industry-leading technical abilities and demonstrate breadth and depth of knowledge. You build software to deliver business impact, making smart technology choices. You work in a team and drive things forward.



Top Skills

  • You write high quality, maintainable, and robust code, often in Java or C++ or C#
  • You recognize and adopt best practices in software engineering: design, testing, version control, documentation, build, deployment, and operations.
  • You have experience building scalable software systems that are high-performance, highly-available, highly transactional, low latency and massively distributed.

Roles & Responsibilities

  • You solve problems at their root, stepping back to understand the broader context.
  • You develop pragmatic solutions and build flexible systems that balance engineering complexity and timely delivery, creating business impact.
  • You understand a broad range of data structures and algorithms and apply them to deliver high-performing applications.
  • You recognize and use design patterns to solve business problems.
  • You understand how operating systems work, perform and scale.
  • You continually align your work with Amazon’s business objectives and seek to deliver business value.
  • You collaborate to ensure that decisions are based on the merit of the proposal, not the proposer.
  • You proactively support knowledge-sharing and build good working relationships within the team and with others in Amazon.
  • You communicate clearly with your team and with other groups and listen effectively.


Skills & Experience

  • Bachelor's or Master's in Computer Science or relevant technical field.
  • Experience in software development and full product life-cycle.
  • Excellent programming skills in any object-oriented programming languages - preferably Java, C/C++/C#, Perl, Python, or Ruby.
  • Strong knowledge of data structures, algorithms, and designing for performance, scalability, and availability.
  • Proficiency in SQL and data modeling.



About Amazon.com

“Many of the problems we face have no textbook solution, and so we happily invent new ones.” – Jeff Bezos

Amazon.com – a place where builders can build. We hire the world's brightest minds and offer them an environment in which they can invent and innovate to improve the experience for our customers. A Fortune 100 company based in Seattle, Washington, Amazon is the global leader in e-commerce. Amazon offers everything from books and electronics to apparel and diamond jewelry. We operate sites in Australia, Brazil, Canada, China, France, Germany, India, Italy, Japan, Mexico, Netherlands, Spain, United Kingdom and United States, and maintain dozens of fulfillment centers around the world which encompass more than 26 million square feet.

Technological innovation drives the growth of Amazon, offering our customers more selection, convenient shopping, and low prices. Amazon Web Services provides developers and small to large businesses access to the horizontally scalable state of the art cloud infrastructure like S3, EC2, AMI, CloudFront and SimpleDB, that powers Amazon.com. Developers can build any type of business on Amazon Web Services and scale their application with growing business needs.

We want you to help share and shape our mission to be Earth's most customer-centric company. Amazon's evolution from Web site to e-commerce partner to development platform is driven by the spirit of invention that is part of our DNA. We do this every day by inventing elegant and simple solutions to complex technical and business problems. We're making history and the good news is that we've only just begun.

 

 

 

About Amazon India

Amazon teams in India work on complex business challenges to innovate and create efficient solutions that enable various Amazon businesses, including Amazon websites across the world as well as support Payments, Transportation, and Digital products and services like the Kindle family of tablets, e-readers and the store. We are proud to have some of the finest talent and strong leaders with proven experience working to make Amazon the Earth’s most customer-centric company.

We made our foray into the Indian market with the launch of Junglee.com, enabling retailers in India to advertise their products to millions of Indian shoppers and drive targeted traffic to their stores. In June 2013, we launched www.amazon.in for shoppers in India. With www.amazon.in, we endeavor to give customers more of what they want – low prices, vast selection, fast and reliable delivery, and a trusted and convenient online shopping experience. In just over a year of launching our India operations, we have expanded our offering to over 18 million products across 36 departments and 100s of categories! Our philosophy of working backwards from the customers is what drives our growth and success.

We will continue to strive to become a trusted and meaningful sales and logistics channel for retailers of all sizes across India and a fast, reliable and convenient online shopping destination for consumers. For us, it is always “Day 1” and we are committed to aggressively invest over the long-term and relentlessly focus on raising the bar for customer experience in India.

Amazon India offers opportunities where you can dive right in, work with smart people on challenging problems and make an impact that contributes to the lives of millions. Join us so you can - Work Hard, Have Fun and Make History.

Amazon India

Srilalitha K
Posted by Srilalitha K
Hyderabad, Bengaluru (Bangalore), Delhi, Gurugram, Pune, Chennai
3 - 9 yrs
₹2L - ₹15L / yr
C
C++
C#
Python
.NET
+14 more

Software Development Engineer – SDE 2

 

As a Software Development Engineer at Amazon, you have industry-leading technical abilities and demonstrate breadth and depth of knowledge. You build software to deliver business impact, making smart technology choices. You work in a team and drive things forward.

 

 Top Skills

You write high quality, maintainable, and robust code, often in Java or C++ or C#

You recognize and adopt best practices in software engineering: design, testing, version control, documentation, build, deployment, and operations.

You have experience building scalable software systems that are high-performance, highly-available, highly transactional, low latency and massively distributed.

Roles & Responsibilities

You solve problems at their root, stepping back to understand the broader context.

You develop pragmatic solutions and build flexible systems that balance engineering complexity and timely delivery, creating business impact.

You understand a broad range of data structures and algorithms and apply them to deliver high-performing applications.

You recognize and use design patterns to solve business problems.

You understand how operating systems work, perform and scale.

You continually align your work with Amazon’s business objectives and seek to deliver business value.

You collaborate to ensure that decisions are based on the merit of the proposal, not the proposer.

You proactively support knowledge-sharing and build good working relationships within the team and with others in Amazon.

You communicate clearly with your team and with other groups and listen effectively.

 

Skills & Experience

Bachelor's or Master's in Computer Science or relevant technical field.

Experience in software development and full product life-cycle.

Excellent programming skills in any object-oriented programming languages - preferably Java, C/C++/C#, Perl, Python, or Ruby.

Strong knowledge of data structures, algorithms, and designing for performance, scalability, and availability.

Proficiency in SQL and data modeling.

Amazon India

Archana J
Posted by Archana J
Bengaluru (Bangalore), Hyderabad, Delhi, Pune, Chennai
2 - 9 yrs
₹10L - ₹15L / yr
Java
Data Structures
Algorithms
Scala
C++
+4 more
Hi,

Please find below JD and do reply with updated resume if you are interested.

Software Development Engineer
Bengaluru / Hyderabad / Chennai / Delhi
As a Software Development Engineer at Amazon, you have industry-leading technical abilities and demonstrate breadth and depth of knowledge. You build software to deliver business impact, making smart technology choices. You work in a team and drive things forward.

Top Skills

• You write high quality, maintainable, and robust code, often in Java or C++.
• You recognize and adopt best practices in software engineering: design, testing, version control, documentation, build, deployment, and operations.
• You have experience building scalable software systems that are high-performance, highly-available, highly transactional, low latency and massively distributed.
Roles & Responsibilities

• You solve problems at their root, stepping back to understand the broader context.
• You develop pragmatic solutions and build flexible systems that balance engineering complexity and timely delivery, creating business impact.
• You understand a broad range of data structures and algorithms and apply them to deliver high-performing applications.
• You recognize and use design patterns to solve business problems.
• You understand how operating systems work, perform and scale.
• You continually align your work with Amazon’s business objectives and seek to deliver business value.
• You collaborate to ensure that decisions are based on the merit of the proposal, not the proposer.
• You proactively support knowledge-sharing and build good working relationships within the team and with others in Amazon.
• You communicate clearly with your team and with other groups and listen effectively.

Skills & Experience

• Bachelor's or Master's in Computer Science or relevant technical field.
• Experience in software development and full product life-cycle.
• Excellent programming skills in any object-oriented programming language - preferably Java, C/C++/C#, Perl, Python, or Ruby.
• Strong knowledge of data structures, algorithms, and designing for performance, scalability, and availability.
• Proficiency in SQL and data modeling.






Thanks and Regards,
Archana J
Recruiter (Tech) | Consumer TA
Amazon India

Nithya Nagarathinam
Posted by Nithya Nagarathinam
Bengaluru (Bangalore), Chennai, Hyderabad, Pune, Gurugram, India
3 - 9 yrs
₹1L - ₹15L / yr
Java
Data Structures
Algorithms
Scala
C++
+6 more

Role- Software Development Engineer-2

As a Software Development Engineer at Amazon, you have industry-leading technical abilities and demonstrate breadth and depth of knowledge. You build software to deliver business impact, making smart technology choices. You work in a team and drive things forward.

Top Skills

You write high quality, maintainable, and robust code, often in Java or C++ or C#

You recognize and adopt best practices in software engineering: design, testing, version control, documentation, build, deployment, and operations.

You have experience building scalable software systems that are high-performance, highly-available, highly transactional, low latency and massively distributed.

Roles & Responsibilities

You solve problems at their root, stepping back to understand the broader context.

You develop pragmatic solutions and build flexible systems that balance engineering complexity and timely delivery, creating business impact.

You understand a broad range of data structures and algorithms and apply them to deliver high-performing applications.

You recognize and use design patterns to solve business problems.

You understand how operating systems work, perform and scale.

You continually align your work with Amazon’s business objectives and seek to deliver business value.

You collaborate to ensure that decisions are based on the merit of the proposal, not the proposer.

You proactively support knowledge-sharing and build good working relationships within the team and with others in Amazon.

You communicate clearly with your team and with other groups and listen effectively.

Skills & Experience

Bachelor's or Master's in Computer Science or relevant technical field.

Experience in software development and full product life-cycle.

Excellent programming skills in any object-oriented programming languages - preferably Java, C/C++/C#, Perl, Python, or Ruby.

Strong knowledge of data structures, algorithms, and designing for performance, scalability, and availability.

Proficiency in SQL and data modeling.

Amazon India

Sanjay Sriram
Posted by Sanjay Sriram
Bengaluru (Bangalore)
3 - 9 yrs
₹30L - ₹60L / yr
Java
Data Structures
Algorithms
Scala
C++
+7 more

Role- Software Development Engineer-2

As a Software Development Engineer at Amazon, you have industry-leading technical abilities and demonstrate breadth and depth of knowledge. You build software to deliver business impact, making smart technology choices. You work in a team and drive things forward.


Top Skills

  • You write high quality, maintainable, and robust code, often in Java or C++ or C#
  • You recognize and adopt best practices in software engineering: design, testing, version control, documentation, build, deployment, and operations.
  • You have experience building scalable software systems that are high-performance, highly-available, highly transactional, low latency and massively distributed.

Roles & Responsibilities

  • You solve problems at their root, stepping back to understand the broader context.
  • You develop pragmatic solutions and build flexible systems that balance engineering complexity and timely delivery, creating business impact.
  • You understand a broad range of data structures and algorithms and apply them to deliver high-performing applications.
  • You recognize and use design patterns to solve business problems.
  • You understand how operating systems work, perform and scale.
  • You continually align your work with Amazon’s business objectives and seek to deliver business value.
  • You collaborate to ensure that decisions are based on the merit of the proposal, not the proposer.
  • You proactively support knowledge-sharing and build good working relationships within the team and with others in Amazon.
  • You communicate clearly with your team and with other groups and listen effectively.


Skills & Experience

  • Bachelor's or Master's in Computer Science or relevant technical field.
  • Experience in software development and full product life-cycle.
  • Excellent programming skills in any object-oriented programming languages - preferably Java, C/C++/C#, Perl, Python, or Ruby.
  • Strong knowledge of data structures, algorithms, and designing for performance, scalability, and availability.
  • Proficiency in SQL and data modeling.



Amagi Media Labs

Rajesh C
Posted by Rajesh C
Bengaluru (Bangalore), Noida
5 - 9 yrs
₹10L - ₹17L / yr
Data engineering
Spark
Scala
Hadoop
Apache Hadoop
+1 more
  • We are looking for: Data Engineer
  • Spark
  • Scala
  • Hadoop
Experience: 5 to 9 years
Notice period: 15 to 30 days
Location: Bangalore / Noida
Joopiterx

Kumar Abhishek
Posted by Kumar Abhishek
Bengaluru (Bangalore)
5 - 7 yrs
₹15L - ₹25L / yr
Scala
Akka

Our startup is building an AI-based decentralized marketplace for the travel industry. It was founded by technologists, marketers, and PhDs/IITians from Google, Airbnb, Agoda, etc.

We're looking to bring on a Scala backend engineer with experience in reactive actor systems using Akka/Akka Streams. The individual will be part of our Europe-based core team, where team members have 20+ years of experience each. Candidates must be hands-on with scalable platform development and capable of individual contribution as well.
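For context on what "reactive actor system" means here: an Akka actor processes messages one at a time from a mailbox, so its state is never touched concurrently. Akka itself is a Scala/JVM library; the dependency-free Python sketch below only mimics the model (the Actor class and its method names are invented for illustration):

```python
import queue
import threading

class Actor:
    """Toy actor: a thread draining a mailbox, processing one message at a time."""

    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def tell(self, message):
        """Fire-and-forget send, analogous to Akka's `tell`."""
        self.mailbox.put(message)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:  # poison pill shuts the actor down
                break
            self.handler(msg)

    def stop(self):
        """Send the poison pill and wait for the mailbox to drain."""
        self.mailbox.put(None)
        self._thread.join()

seen = []
counter = Actor(seen.append)  # state is touched only by the actor's own thread
for i in range(5):
    counter.tell(i)
counter.stop()
```

Because the mailbox is FIFO and drained by a single thread, the messages are observed strictly in send order — the property actor systems are built around.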

Product based company (Financial services)

Agency job
via Qrata by Revathi Satish
Bengaluru (Bangalore)
4 - 8 yrs
₹30L - ₹35L / yr
Python
Django
Ruby
Ruby on Rails (ROR)
Scala
+2 more

Backend/Fullstack Developer

 

About the Company:

Company is India’s largest and best-established digital wealth management service that helps its customers create wealth for their long-and-short-term goals. Founded in 2012, Company is a pioneer in the digital financial services category and is recognised for creating simple and elegant user experiences in a complex domain. We do this by simplifying complex investing concepts and automating best practices, so our customers can grow their wealth without worry. We achieve this by combining cutting-edge technology, data-driven algorithms, awesome UX and friendly customer support. Our task is ambitious and we like to work hard as well as smart. We want to build a team that relishes challenges and contributes to a new way of thinking and investing in India. We are also invested in the growth of our colleagues and providing a supportive and thriving working environment for everyone. We have been recognised by Great Place To Work® as one of India’s best companies to work for.

Role specific info:

  • 4-12 years of experience building good quality production software
  • Excellent knowledge of at least one ecosystem based on Elixir/Phoenix, Ruby/Rails, Python/Django, Go/Scala/Clojure
  • Good OO skills, including strong design patterns knowledge
  • Familiar with datastores like MySQL, PostgreSQL, Redis, Redshift etc.
  • Familiarity with react.js/react-native, vue.js etc.
  • Knowledge of deploying software to AWS, GCP, Azure
  • Knowledge of software best practices, like Test-Driven Development (TDD) and Continuous Integration (CI)
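Test-Driven Development, listed above, means writing the failing test before the implementation. A minimal, self-contained illustration (the slugify function and its test cases are invented for the example):

```python
# TDD in miniature: the assertions below were (notionally) written first,
# then slugify() was implemented to make them pass.

def slugify(title):
    """Turn a title into a URL slug: lowercase, words joined by hyphens."""
    # Replace every non-alphanumeric, non-space character with a space, then split.
    words = "".join(c if c.isalnum() or c.isspace() else " " for c in title).split()
    return "-".join(w.lower() for w in words)

# The "tests" that drove the implementation:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Scala Jobs in Bangalore ") == "scala-jobs-in-bangalore"
```

In a CI setup these assertions would live in a test suite (pytest or similar) that runs on every commit, which is the Continuous Integration half of the bullet above.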

 

What would you do here:

  • Writing quality code using language best practices.
  • Working in a highly collaborative team.
  • Building good software using the latest tools and techniques.
  • Participating in design reviews, coding modules, code reviews and unit testing.
  • Taking ownership of quality and usability of your code.
  • Mentoring co-workers.
  • Leading efforts in improving technology architecture.
Mobility Platform

Agency job
via zyoin by Suchoritha Zyoin
Bengaluru (Bangalore)
11 - 16 yrs
Best in industry
Engineering Management
Engineering manager
Engineering head
Technical Architecture
Technical lead
+8 more
  • Take the microservices architecture to the next level of scalability, efficiency, observability, and availability. 
  • Build, deploy & run multi-homed systems that work in multiple regions and cloud providers. 
  • Build (and open source) data processing, storage and fetch systems at the petabyte scale with the lowest cost/GB while still responding in milliseconds at the 99th percentile. 
  • Optimize algorithms which influence personalization, fulfillment/allocation, pricing, maps & routing, fleet positioning, payments, fraud prevention etc 
  • Create platforms, reusable libraries, and utilities wherever applicable 
  • Write high-quality code that is modular, functional and testable; Establish the best coding practices 
  • Formally mentor junior engineers on design, coding, and troubleshooting 
  • Troubleshoot issues effectively in a distributed architecture 
  • Communicate, collaborate and work effectively in a global environment 
  • Operationalize releases by partnering with Tech operations on capacity planning and operability of the product
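"Responding in milliseconds at the 99th percentile", mentioned above, refers to tail latency: the slowest 1% of requests. A quick sketch of how a percentile is computed from a latency sample using the nearest-rank method (real monitoring stacks use streaming histogram sketches instead; the sample values here are made up):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

latencies_ms = [12, 15, 11, 14, 250, 13, 16, 12, 15, 14]  # one slow outlier
p50 = percentile(latencies_ms, 50)
p99 = percentile(latencies_ms, 99)
```

Note how the median barely registers the outlier while the p99 is dominated by it — which is why SLOs for systems like this are stated at the 99th percentile rather than the average.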

 

Experience:

  • No. of years - 10+ years
  • Type of experience - software design, development & architecture
  • Experience in Product companies working on Internet-scale applications is preferred 
  • Contribution to open-source software, tech blogs, talking at tech conferences, etc. 

 

 

Educational Qualifications:

○  Must have - Bachelor’s or Master’s degree in Engineering; premier institutes preferred

 

Key competencies:

 

  •  Deep understanding of one or more of Java/Go/Scala/C++. Ability to understand and critique the core library/language constructs. 
  • Knowledge of processor, memory, network and storage internals. 
  • Deep understanding of distributed systems including fault modeling, concurrency, isolation, consensus etc. 
  • Internals of an RDBMS like MySQL 
  • Conversant with the internals of systems like Kafka, Cassandra/Scylla, Redis, RocksDB, etc 
  • Working knowledge of hosting and network infrastructure (K8s, Envoy, etc) 
  • Familiarity with binary serialization protocols like thrift/protobuf/flatbuffers etc 
  • Familiar with gRPC, HTTP/2, QUIC, etc. 
  • Troubleshooting memory issues, GC tuning, resource leaks etc. 
  • Strong problem-solving skills, algorithmic skills and data structures. 
  • Productionizing machine learning pipelines using Spark/Flink/TensorFlow etc 
Leading Sales Platform

Agency job
via Qrata by Blessy Fernandes
Bengaluru (Bangalore)
5 - 10 yrs
₹30L - ₹45L / yr
Big Data
ETL
Spark
Data engineering
Data governance
+4 more
  • Work with product managers and development leads to create testing strategies
  • Develop and scale an automated data validation framework
  • Build and monitor key metrics of data health across the entire Big Data pipelines
  • Establish early alerting and escalation processes to quickly identify and remedy quality issues before anything ever goes ‘live’ in front of the customer
  • Build/refine tools and processes for quick root-cause diagnostics
  • Contribute to the creation of quality assurance standards, policies, and procedures to influence the DQ mindset across the company

Required skills and experience:

  • Solid experience working in Big Data ETL environments with Spark and Java/Scala/Python
  • Strong experience with AWS cloud technologies (EC2, EMR, S3, Kinesis, etc.)
  • Experience building monitoring/alerting frameworks with tools like New Relic, with escalations via Slack/email/dashboard integrations, etc.
  • Executive-level communication, prioritization, and team leadership skills
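The "automated data validation framework" in this posting boils down to applying named rules to every row and surfacing failures before data goes live. A bare-bones sketch of that idea (the rules and field names are invented for illustration):

```python
# Row-level data-quality check of the kind such a framework automates.

def check_rows(rows, rules):
    """Apply named rules to each row; return failures as (row_index, rule_name) pairs."""
    failures = []
    for i, row in enumerate(rows):
        for name, rule in rules.items():
            if not rule(row):
                failures.append((i, name))
    return failures

rules = {
    "id_present": lambda r: r.get("id") is not None,
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
}

rows = [
    {"id": 1, "amount": 10},
    {"id": None, "amount": 5},
    {"id": 3, "amount": -2},
]
failures = check_rows(rows, rules)
```

A production framework would run such checks per partition inside Spark and feed the failure counts into the alerting metrics the posting describes, but the rule-per-row shape is the same.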
Mobile Programming LLC

Apurva kalsotra
Posted by Apurva kalsotra
Mohali, Gurugram, Pune, Bengaluru (Bangalore), Hyderabad, Chennai
3 - 8 yrs
₹2L - ₹9L / yr
Data engineering
Data engineer
Spark
Apache Spark
Apache Kafka
+13 more

Responsibilities for Data Engineer

  • Create and maintain optimal data pipeline architecture,
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

Qualifications for Data Engineer

  • Advanced SQL knowledge and experience with relational databases, including query authoring and working familiarity with a variety of databases.
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • We are looking for a candidate with 5+ years of experience in a Data Engineer role, who has attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:

  • Experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift
  • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
  • Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
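The extract-transform-load work described above can be reduced to a toy pipeline in plain Python; in production the same shape is typically expressed as Spark jobs orchestrated by Airflow or Luigi. The inline CSV and all names are illustrative:

```python
# Toy ETL pipeline: extract rows from a CSV source, aggregate spend per
# (user, country), and load the result into a "warehouse" (here, a list).

import csv
import io

RAW = "user_id,country,spend\n1,IN,120\n2,US,80\n1,IN,30\n"

def extract(raw):
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    totals = {}
    for r in rows:
        key = (r["user_id"], r["country"])
        totals[key] = totals.get(key, 0) + int(r["spend"])
    return totals

def load(totals, sink):
    for (user, country), spend in sorted(totals.items()):
        sink.append({"user_id": user, "country": country, "total_spend": spend})

warehouse = []
load(transform(extract(RAW)), warehouse)
print(warehouse[0])  # {'user_id': '1', 'country': 'IN', 'total_spend': 150}
```

The same three stages map directly onto a Spark job: `extract` becomes a DataFrame read, `transform` a `groupBy().agg()`, and `load` a write to Redshift or a data lake.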
Mobile Programming LLC
Apurva kalsotra
Posted by Apurva kalsotra
Mohali, Gurugram, Bengaluru (Bangalore), Chennai, Hyderabad, Pune
3 - 8 yrs
₹3L - ₹9L / yr
Data Warehouse (DWH)
Big Data
Spark
Apache Kafka
Data engineering
+14 more
Day-to-day Activities
Develop complex queries, pipelines and software programs to solve analytics and data mining problems
Interact with other data scientists, product managers, and engineers to understand business problems, technical requirements to deliver predictive and smart data solutions
Prototype new applications or data systems
Lead data investigations to troubleshoot data issues that arise along the data pipelines
Collaborate with different product owners to incorporate data science solutions
Maintain and improve data science platform
Must Have
BS/MS/PhD in Computer Science, Electrical Engineering or related disciplines
Strong fundamentals: data structures, algorithms, database
5+ years of software industry experience with 2+ years in analytics, data mining, and/or data warehouse
Fluency with Python
Experience developing web services using REST approaches.
Proficiency with SQL/Unix/Shell
Experience in DevOps (CI/CD, Docker, Kubernetes)
Self-driven, challenge-loving, detail-oriented team player with excellent communication skills and the ability to multi-task and manage expectations
Preferred
Industry experience with big data processing technologies such as Spark and Kafka
Experience with machine learning algorithms and/or R a plus 
Experience in Java/Scala a plus
Experience with any MPP analytics engines like Vertica
Experience with data integration tools like Pentaho/SAP Analytics Cloud
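The stream-processing experience listed above (Spark, Kafka) centres on windowed aggregation; the tumbling-window case can be sketched in plain Python over an in-memory event list, with timestamps and window size purely illustrative:

```python
# Tumbling-window count: the aggregation that Spark Structured Streaming or
# Kafka Streams performs over a stream, reduced to a batch over a list.
# Each event falls into exactly one fixed, non-overlapping window.

def tumbling_window_counts(events, window_size):
    """Count events per window of `window_size` seconds; keys are window starts."""
    counts = {}
    for ts in events:
        window_start = (ts // window_size) * window_size
        counts[window_start] = counts.get(window_start, 0) + 1
    return counts

events = [1, 3, 7, 12, 14, 29]             # event timestamps in seconds
print(tumbling_window_counts(events, 10))  # {0: 3, 10: 2, 20: 1}
```

Real stream processors add what this sketch omits: late-arriving events, watermarks, and incremental state, which is where most of the engineering effort goes.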
Looking to hire Data Engineers for a client in Bangalore.

Agency job
via Artifex HR by Maria Theyos
Bengaluru (Bangalore)
3 - 5 yrs
₹8L - ₹10L / yr
Big Data
Hadoop
Apache Spark
Spark
Apache Kafka
+11 more

We are looking for a savvy Data Engineer to join our growing team of analytics experts. 

 

The hire will be responsible for:

- Expanding and optimizing our data and data pipeline architecture

- Optimizing data flow and collection for cross functional teams.

- Will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.

- Must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.

Required experience:

- Azure: ADLS, Databricks, Stream Analytics, SQL DW, Cosmos DB, Analysis Services, Azure Functions, serverless architecture, ARM templates

- Relational SQL and NoSQL databases, including Postgres and Cassandra

- Object-oriented/functional scripting languages: Python, SQL, Scala, Spark-SQL etc.

Nice to have experience with :

- Big data tools: Hadoop, Spark and Kafka

- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow

- Stream-processing systems: Storm

Database: SQL DB

Programming languages: PL/SQL, Spark SQL

Looking for candidates with Data Warehousing experience, strong domain knowledge & experience working as a Technical lead.

The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.

Amazon India
Neeta Singh Mehta
Posted by Neeta Singh Mehta
Bengaluru (Bangalore), Hyderabad, Noida, Gurugram, Mumbai, Pune, NCR (Delhi | Gurgaon | Noida)
3 - 8 yrs
₹6L - ₹32L / yr
Java
Python
Ruby
PHP
C++
+9 more

Greetings from Amazon...!                                       

 

It is our pleasure to personally invite you to apply for a role with Amazon Development Centre India (ADCI). At Amazon, we look for people with a passion for technology, and you are one of the shortlisted candidates. Our business is committed to recognizing potential and creating teams that embrace innovation.

 

Please find the Eligible criteria and requirements:

 

Job title: SDE-II (Software Development Engineer)

Role opportunity: Permanent / Full-time / FTE / Regular

Work location: Hyderabad / Bangalore / Gurgaon

 

Must Have

  • Strong exposure to data structures, algorithms, coding, system design (LLD, HLD, OOAD), distributed systems, problem solving, architecture (MVC/microservices), and logical thinking.

Amazon (ADCI) - If you are looking for an opportunity to solve deep technical problems and build innovative solutions in a fast-paced environment, working with smart, passionate software developers, this might be the role for you. Amazon’s transportation systems get millions of packages to customers worldwide faster and cheaper while providing a world-class customer experience – from checkout to shipment tracking to delivery. Our software systems include services that handle thousands of requests per second, make business decisions impacting billions of dollars a year, integrate with a network of small and large carriers worldwide, manage business rules for millions of unique products, and improve the experience of millions of online shoppers. With rapid expansion into new geographies, innovations in supply chain, delivery models and customer experience, an increasingly complex transportation network, an ever-expanding selection of products and a growing number of shipments worldwide, we have an opportunity to build software that scales the business, leads the industry through innovation and delights millions of customers worldwide.

 

As an SDE, you will develop a deep understanding of our business, work closely with development teams and own the architecture and end-to-end delivery of software components.

About Amazon India:

Amazon teams in India work on complex business challenges to innovate and create efficient solutions that enable various Amazon businesses, including Amazon websites across the world as well as support Payments, Transportation, and Digital products and services like the Kindle family of tablets, e-readers and the store. We are proud to have some of the finest talent and strong leaders with proven experience working to make Amazon the Earth’s most customer-centric company.

We made our foray into the Indian market with the launch of Junglee.com, enabling retailers in India to advertise their products to millions of Indian shoppers and drive targeted traffic to their stores. In June 2013, we launched www.amazon.in for shoppers in India. With www.amazon.in, we endeavor to give customers more of what they want – low prices, vast selection, fast and reliable delivery, and a trusted and convenient online shopping experience. In just over a year of launching our India operations, we have expanded our offering to over 18 million products across 36 departments and hundreds of categories! Our philosophy of working backwards from the customers is what drives our growth and success.

We will continue to strive to become a trusted and meaningful sales and logistics channel for retailers of all sizes across India and a fast, reliable and convenient online shopping destination for consumers. For us, it is always “Day 1” and we are committed to aggressively invest over the long-term and relentlessly focus on raising the bar for customer experience in India. Amazon India offers opportunities where you can dive right in, work with smart people on challenging problems and make an impact that contributes to the lives of millions. Join us so you can - Work Hard, Have Fun and Make History.

 

Basic Qualifications:

 

  • 3+ years’ experience building successful production software systems
  • A solid grounding in Computer Science fundamentals (based on a BS or MS in CS or related field)
  • The ability to convert raw requirements into good designs while exploring technical feasibility tradeoffs
  • Expertise in system design (design patterns, LLD, HLD, SOLID principles, OOAD, distributed systems, etc.) and architecture (MVC/microservices)
  • Good understanding of at least one modern programming language (Java) and open-source technologies (C++, Python, Scala, C#, PHP, Ruby, etc.)
  • Excellence in technical communication
  • Has experience in mentoring other software developers

 

Preferred Qualifications:

 

  • BS/MS in Computer Science or equivalent
  • Experience developing service oriented architectures and an understanding of design for scalability, performance and reliability
  • Demonstrated ability to mentor other software developers to maintain architectural vision and software quality
  • Demonstrated ability to achieve stretch goals in a highly innovative and fast paced environment
  • Expertise in delivering high-quality, innovative application
  • Strong desire to build, sense of ownership, urgency, and drive
  • Strong organizational and problem solving skills with great attention to detail
  • Ability to triage issues, react well to changes, work with teams and ability to multi-task on multiple products and projects.
  • Experience building highly scalable, high availability services
  • The ideal candidate will be a visionary leader, builder and operator.
  • They should have experience leading or contributing to multiple simultaneous product development efforts and initiatives.
  • They need to balance technical leadership with strong business judgment to make the right decisions about technology choices.
  • They need to constantly strive for simplicity while demonstrating significant creativity, innovation and judgment.
  • Proficiency in at least one modern programming language.
  • Experience with SQL or NoSQL databases.
  • Strong sense of ownership, urgency, and drive.
  • Demonstrated leadership abilities in an engineering environment in driving operational excellence and best practices.
  • Demonstrated ability to achieve stretch goals in a highly innovative and fast paced environment.
  • Excellent communication, collaboration, reporting, analytical and problem solving skills.

 

 

Good to Have:

  • Knowledge of professional software engineering practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
  • Experience with enterprise-wide systems
  • Experience influencing software engineers best practices within your team
  • Hands-on expertise in many disparate technologies, typically ranging from front-end user interfaces through to back-end systems and all points in between
  • Strong written and verbal communication skills preferred

 

Key Points to remember:

 

  • Strong knowledge of the Software Development Life Cycle methodology
  • Technical design, development and implementation decisions on the use of technology in area(s) of specialization.
  • Write or modify programming code to suit customers' needs.
  • Unit test to ensure code meets requirements, including integration tests as needed.
  • Ability to understand and analyze issues and use judgment to make decisions.
  • Strong problem solving & troubleshooting skills
  • Strong communication skills
  • Responsible for self-development according to professional development plan

"Fintech product based company"

Agency job
via Anzy by Ismail shaikh
Bengaluru (Bangalore), NCR (Delhi | Gurgaon | Noida)
6 - 12 yrs
₹35L - ₹55L / yr
Engineering Management
Engineering manager
Engineering head
Technical Architecture
Technical lead
+7 more
Roles and Responsibilities:
  • Mentor and guide a team of 7-10 top-notch full stack engineers across various levels.
  • Responsible for the product engineering roadmap, quality assurance and stability of the products.
  • Responsible for right tech solutioning and the architecture being deployed, maintaining the right balance between short-term and long-term outcomes.
  • Collaborate with product managers and business teams to develop the long-term product roadmap and own the release planning cycles.
  • Define the career trajectory path for the engineers in the team; perform regular performance evaluations and share feedback.
  • Operate with scale and speed amidst flux; there is just a LOT happening.
  • Work with the Head of Engineering to define best practices for development and champion their adoption, while architecting and designing technically robust, flexible and scalable solutions.
  • Perform well amid uncertainty; collaborate and work with unclear interfaces to other teams in our fast-paced environment.
  • Responsible for hiring and skills management within the team.
Desired Skills and Experience:
  • BS/MS in Computer Science from NIT, REC, BITS or other top-tier colleges.
  • 7-8 years of total work experience on high-scale web-based products with reputed companies, including 1-2 years managing engineering teams and project delivery, preferably in a startup environment.
  • Strong hands-on experience in at least one object-oriented language (C++/Java) and one dynamic scripting language (e.g. Ruby, Python, GoLang) is a must.
  • Sound understanding of web technologies (e.g. JavaScript, HTML5, CSS) and strong command over databases (MySQL, PostgreSQL, Redis, Elasticsearch).
  • Knowledge of MVC frameworks like Rails, Django, Symfony, Yii is desired.
  • Prior experience with Google Cloud Platform is preferred.
  • Excellent written and oral communication skills.
  • Excellent personal, people and communication skills.
  • Experience with organisation-wide initiatives and change management.
  • Ability to make quick decisions in high-pressure environments with limited information.
A logistics company

Agency job
via Anzy by Dattatraya Kolangade
Bengaluru (Bangalore)
5 - 7 yrs
₹18L - ₹25L / yr
Data engineering
ETL
SQL
Hadoop
Apache Spark
+13 more
Key responsibilities:
  • Create and maintain data pipelines
  • Build and deploy ETL infrastructure for optimal data delivery
  • Work with various teams, including product, design and the executive team, to troubleshoot data-related issues
  • Create tools for data analysts and scientists to help them build and optimise the product
  • Implement systems and processes for data access controls and guarantees
  • Distill knowledge from experts in the field outside the org and optimise internal data systems

Freight Commerce Solutions Pvt Ltd.

Preferred qualifications/skills:
  • 5+ years of experience
  • Strong analytical skills
  • Degree in Computer Science, Statistics, Informatics or Information Systems
  • Strong project management and organisational skills
  • Experience supporting and working with cross-functional teams in a dynamic environment
  • SQL guru with hands-on experience on various databases
  • NoSQL databases like Cassandra, MongoDB
  • Experience with Snowflake, Redshift
  • Experience with tools like Airflow, Hevo
  • Experience with Hadoop, Spark, Kafka, Flink
  • Programming experience in Python, Java, Scala
Blok
Shoa Khan
Posted by Shoa Khan
Bengaluru (Bangalore)
3 - 8 yrs
₹10L - ₹25L / yr
PostgreSQL
Java
Scala
C++
JavaScript
+1 more

Who We Are

Getir is a technology company, a pioneer of the online ultra-fast grocery delivery service business, that has transformed the way in which millions of people across the world consume groceries. We believe in a world where getting everything you need, when you need it, sustainably, is the new normal. 

Getir is growing incredibly fast in Europe, but we want to grow globally. From London to Tokyo, Sao Paulo to New York, our global ambitions can only be accomplished with exceptional technology.

If you've got the experience and the ambition to be a Database Administrator (Postgres) at Getir and  a founding part of our technology hub in Bangalore, please apply.


What you’ll be doing:

  • Work with engineering and other teams to build and maintain our databases and answer big data questions
  • Use your past DBA experience and industry wide best practices to scale and optimize the database services.
  • Regularly conduct database health monitoring and diagnostics
  • Create processes to ensure data integrity and identify potential data errors.
  • Document and update procedures and processes.
  • Troubleshoot and resolve problems as they arise

What we look for in you:

  • You have A Bachelor’s degree in Computer Science, Computer Engineering, Data Science, or another related field.
  • 3+ years of experience as a Database Administrator
  • Proficiency administering PostgreSQL
  • Extensive experience performing general troubleshooting and database maintenance activities, including backup and recovery, capacity planning, and managing user accounts.
  • Experience in identifying and documenting risk areas and mitigation strategies for process and procedure activities.
  • Experience in managing schemas, indexing, objects, and partitioning the tables.
  • Experience in managing system configurations.
  • Experience creating data design models, database architecture, and data repository design.
  • Strong understanding of SQL tuning and optimization of query plans.
  • Linux shell scripting skills and experience with production Linux environments
  • Experience working with software engineers in a highly technical environment.
  • Knowledge of 1+ programming language (e.g. C++, Scala, Java, JavaScript etc.)
  • Excellent verbal and written communication skills.
  • Knowledge administering MongoDB (Good to have)
  • Knowledge administering Amazon RDS for PostgreSQL & Redshift. (Good to have)
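The "SQL tuning and optimization of query plans" requirement above can be illustrated with a toy example. Here sqlite3 from the Python standard library stands in for Postgres (where a DBA would read EXPLAIN / EXPLAIN ANALYZE output and pg_stat views instead), and the orders schema is invented for the sketch:

```python
# Read a query plan before and after adding an index: the unindexed query
# does a full table scan; after CREATE INDEX it becomes an index search.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, float(i)) for i in range(1000)])

def plan(sql):
    """Concatenate the detail column of EXPLAIN QUERY PLAN output."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
print(plan(query))  # a table scan of orders (exact wording varies by SQLite version)

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(plan(query))  # now a search using idx_orders_customer
```

The workflow is the same in Postgres: find the slow query, read its plan, then add (or fix) an index, adjust statistics, or rewrite the query until the plan improves.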
Jivox Software India Pvt Ltd
Kiranmai Banisetty
Posted by Kiranmai Banisetty
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹19L / yr
Scala
Java
Data Structures
Algorithms
Amazon Web Services (AWS)

Your Opportunity

  • Own and drive business features into tech requirements
  • Design & develop large scale real time server side systems
  • Quickly create quality prototypes
  • Staying updated on emerging technologies
  • Ensuring that all deliverables adhere to our world class standards
  • Promote coding best practices
  • Mentor and develop junior developers in the team

Required Experience

  • 4+ years of relevant experience as described below
  • Excellent grasp of core Java, multithreading and OO design patterns
  • Experience with Scala, functional/reactive programming and Akka/Play is a plus
  • Excellent understanding of data structures and algorithms
  • Solid grasp of large-scale distributed real-time systems
  • Prior experience building scalable and resilient microservices
  • Solid understanding of relational databases, NoSQL databases and caching systems
  • Good understanding of Big Data technologies such as Spark and Hadoop is a plus
  • Experience with at least one of AWS, Azure or GCP

Who you are

  • You have excellent and effective communication and collaborative skills
  • You love problem solving
  • You stay up to date with the latest technologies and then apply them in real life
  • You love paying attention to detail
  • You thrive in meeting tight deadlines and prioritising workloads
  • Ability to collaborate across multiple functions

Education

Bachelor’s degree in Engineering or equivalent experience within the field

K RIDE
Pratiksha A
Posted by Pratiksha A
Bengaluru (Bangalore)
0 - 1 yrs
₹2L - ₹2.4L / yr
C++
Java
Go Programming (Golang)
Data Structures
Algorithms
+3 more
Quick Ride is hiring for Software Engineer.

Qualification: B.E/B.Tech/MCA with 70% aggregate, graduating in 2020/2021

Salary: Rs. 2.4 LPA to start, revised half-yearly based on performance.

Skills: Programming with C++/Java/Swift/Angular/QA (any one)

Job Location : Marathahalli, Bangalore
Leading Logistics-tech Platform

Agency job
via Unnati by Rakhi Gayen
Bengaluru (Bangalore)
1 - 3 yrs
₹14L - ₹18L / yr
Java
Software Development
Logistics
Ruby
Clojure
+4 more
Work with a new-age, reliable logistics platform aiming to disrupt on-time delivery with ultimate efficiency! Read more.
 
Our client is a leading intra-city delivery solutions provider that focuses on sorting out the largely unorganised logistics space in the country. It is also an aggregator of inter-city mini trucks and large transport vehicles for the Retail, Ecommerce and FMCG sectors. Their app is a platform used by their clients and truck owners, providing GPS-enabled vehicles, 24x7 support, economical pricing and multi-capacity loaders. Truckers can use their location and choose their transport jobs, while the companies get to pick the drivers as per their ratings.
 
With a fleet of over 44,000 trucks and clients like Britannia, Bisleri, Amazon, Flipkart, Metro Cash & Carry, Gati, Delhivery and more, the 5-year-old platform has raised over $20Mn across multiple funding rounds. Founded and led by IIT-KG alumni, the company has operations in major cities across the country and is looking to make inroads into other sectors and verticals.
 
As a Software Development Engineer -1, you will solve complex and interesting problems, converting design into code fluently.
 
What you will do:
  • Working with Databases and Linux platform
  • Understanding algorithms, databases and their space and time complexities
  • Writing unit and integration tests with reasonable coverage of code and interfaces
  • Solving complex and interesting problems
  • Taking up a high level of ownership and commitment towards the business and product vision
 

What you need to have:

  • Minimum 1 year of experience
  • Strong problem-solving skills
  • Good understanding of data structures & algorithms and their space & time complexities
  • Strong hands-on, practical working experience with at least one programming language: C/Java/C++/C#
  • Excellent coding skills – should be able to convert design into code fluently
  • Strong technical aptitude and a good knowledge of CS fundamentals
  • Hands-on experience working with databases and the Linux platform is a plus
  • B.Tech in Computer Science or equivalent from a reputed college
  • Good experience in at least one general programming language (Java, Ruby, Clojure, Scala, C/C++, Python and SQL)
  • A solid foundation in computer science, with strong competencies in data structures, algorithms, and software design
  • A penchant for solving complex and interesting problems; experience working in a startup-like environment with high levels of ownership and commitment
  • Good skills to write unit & integration tests with reasonable coverage of code & interfaces
  • TDD is a plus
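The unit-testing expectation above can be sketched with the standard unittest module. `parse_capacity` is a made-up function standing in for real business logic (say, parsing a truck's load capacity):

```python
# A tiny unit under test plus its test case, in the TDD style the role asks
# for: each test pins one behaviour, including the error path.

import unittest

def parse_capacity(text):
    """Parse strings like '2.5t' or '750kg' into kilograms."""
    text = text.strip().lower()
    if text.endswith("kg"):
        return float(text[:-2])
    if text.endswith("t"):
        return float(text[:-1]) * 1000
    raise ValueError(f"unrecognised capacity: {text!r}")

class ParseCapacityTest(unittest.TestCase):
    def test_tonnes(self):
        self.assertEqual(parse_capacity("2.5t"), 2500.0)

    def test_kilograms(self):
        self.assertEqual(parse_capacity("750kg"), 750.0)

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            parse_capacity("lots")

suite = unittest.TestLoader().loadTestsFromTestCase(ParseCapacityTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

In TDD the three test methods would be written first, with `parse_capacity` implemented only far enough to turn them green.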