Scala Developer
Information Solution Provider Company

Agency job
2 - 7 yrs
₹10L - ₹15L / yr
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
Skills
Scala
Spark
Eclipse (IDE)
IntelliJ IDEA

 

Roles and Responsibilities

  • Good hands-on experience in Spark
  • Extensive experience in Scala (IDE: Eclipse or IntelliJ)
  • Working knowledge of creating and working with Maven projects, the Scala Maven plugin, and POM files (see the sketch below)
  • Azure Data Factory and Azure Databricks
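To gauge the hands-on level this role implies, here is a minimal, hedged sketch of a Spark job in Scala of the kind such a Maven project (POM plus scala-maven-plugin) would build. The input path and column name are hypothetical placeholders, not details from this posting.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{col, count}

    // Minimal Spark job in Scala: read a CSV and count events per user.
    // "data/events.csv" and "userId" are hypothetical placeholders.
    object EventCounts {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("EventCounts")
          .master("local[*]") // local run for illustration; a real job targets a cluster
          .getOrCreate()

        val events = spark.read
          .option("header", "true")
          .csv("data/events.csv")

        // Aggregate and show the ten busiest users
        events.groupBy(col("userId"))
          .agg(count("*").as("events"))
          .orderBy(col("events").desc)
          .show(10)

        spark.stop()
      }
    }

In a Maven build, the scala-maven-plugin would typically be bound to the compile and test-compile phases in the POM, with spark-sql declared as a provided-scope dependency.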

About Information Solution Provider Company

Founded, type, size, stage, and company description: N/A
Company social profiles: N/A

Similar jobs

Healthtech Startup
Agency job
via Qrata by Rayal Rajan
Bengaluru (Bangalore)
6 - 10 yrs
₹20L - ₹30L / yr
Google Cloud Platform (GCP)
BigQuery

Description: 

As a Data Engineering Lead at Company, you will be at the forefront of shaping and managing our data infrastructure with a primary focus on Google Cloud Platform (GCP). You will lead a team of data engineers to design, develop, and maintain our data pipelines, ensuring data quality, scalability, and availability for critical business insights. 


Key Responsibilities: 

1. Team Leadership: 

a. Lead and mentor a team of data engineers, providing guidance, coaching, and performance management. 

b. Foster a culture of innovation, collaboration, and continuous learning within the team. 

2. Data Pipeline Development (Google Cloud Focus): 

a. Design, develop, and maintain scalable data pipelines on Google Cloud Platform (GCP) using services such as BigQuery, Dataflow, and Dataprep (a minimal BigQuery query sketch follows this list).

b. Implement best practices for data extraction, transformation, and loading (ETL) processes on GCP. 

3. Data Architecture and Optimization: 

a. Define and enforce data architecture standards, ensuring data is structured and organized efficiently. 

b. Optimize data storage, processing, and retrieval for maximum performance and cost-effectiveness on GCP. 

4. Data Governance and Quality: 

a. Establish data governance frameworks and policies to maintain data quality, consistency, and compliance with regulatory requirements.

b. Implement data monitoring and alerting systems to proactively address data quality issues. 

5. Cross-functional Collaboration: 

a. Collaborate with data scientists, analysts, and other cross-functional teams to understand data requirements and deliver data solutions that drive business insights. 

b. Participate in discussions regarding data strategy and provide technical expertise. 

6. Documentation and Best Practices: 

a. Create and maintain documentation for data engineering processes, standards, and best practices. 

b. Stay up-to-date with industry trends and emerging technologies, making recommendations for improvements as needed. 
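Item 2a above centers on pipelines over BigQuery; purely as a hedged illustration of that building block, here is a minimal Scala sketch that runs a query with Google's google-cloud-bigquery Java client. The table and column names are placeholders, and credentials are assumed to come from Application Default Credentials.

    import com.google.cloud.bigquery.{BigQueryOptions, QueryJobConfiguration}
    import scala.jdk.CollectionConverters._

    // Hypothetical sketch: query BigQuery from Scala via the Java client.
    // `analytics.events`, user_id, and events are placeholder names.
    object BigQueryQuery {
      def main(args: Array[String]): Unit = {
        val bigquery = BigQueryOptions.getDefaultInstance.getService

        val config = QueryJobConfiguration
          .newBuilder(
            "SELECT user_id, COUNT(*) AS events FROM `analytics.events` GROUP BY user_id LIMIT 10")
          .setUseLegacySql(false) // standard SQL
          .build()

        // Runs the query synchronously and iterates over the result rows
        for (row <- bigquery.query(config).iterateAll().asScala) {
          println(s"${row.get("user_id").getStringValue}: ${row.get("events").getLongValue}")
        }
      }
    }

A production pipeline on Dataflow would use the Apache Beam SDK instead; this only shows the query path.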


Qualifications 

● Bachelor's or Master's degree in Computer Science, Data Engineering, or related field. 

● 5+ years of experience in data engineering, with a strong emphasis on Google Cloud Platform. 

● Proficiency in Google Cloud services, including BigQuery, Dataflow, Dataprep, and Cloud Storage. 

● Experience with data modeling, ETL processes, and data integration.

● Strong programming skills in languages like Python or Java. 

● Excellent problem-solving and communication skills. 

● Leadership experience and the ability to manage and mentor a team.


Bengaluru (Bangalore), Gurugram
1 - 7 yrs
₹4L - ₹10L / yr
Python
R Programming
SAS
Surveying
Data Analytics

Desired Skills & Mindset:


We are looking for candidates who have demonstrated both a strong business sense and a deep understanding of the quantitative foundations of modelling.


• Excellent analytical and problem-solving skills, including the ability to disaggregate issues, identify root causes and recommend solutions

• Experience with statistical programming software such as SPSS, and comfort working with large data sets.

• R, Python, SAS, and SQL are preferred but not mandatory

• Excellent time management skills

• Good written and verbal communication skills; understanding of both written and spoken English

• Strong interpersonal skills

• Ability to act autonomously, bringing structure and organization to work

• Creative and action-oriented mindset

• Ability to interact in a fluid, demanding and unstructured environment where priorities evolve constantly, and methodologies are regularly challenged

• Ability to work under pressure and deliver on tight deadlines


Qualifications and Experience:


• Graduate degree in Statistics/Economics/Econometrics/Computer Science/Engineering/Mathematics/MBA (with a strong quantitative background) or equivalent

• Strong track record of work experience in the field of business intelligence, market research, and/or advanced analytics

• Knowledge of data collection methods (focus groups, surveys, etc.)

• Knowledge of statistical packages (SPSS, SAS, R, Python, or similar), databases, and MS Office (Excel, PowerPoint, Word)

• Strong analytical and critical thinking skills

• Industry experience in Consumer Experience/Healthcare is a plus

Decision Foundry
Posted by Christy Philip
Remote only
2 - 5 yrs
Best in industry
Amazon Redshift
ETL
Informatica
Data Warehouse (DWH)
Relational Database (RDBMS)

Description

About us

Welcome to Decision Foundry!

We are both a high growth startup and one of the longest tenured Salesforce Marketing Cloud Implementation Partners in the ecosystem. Forged from a 19-year-old web analytics company, Decision Foundry is the leader in Salesforce intelligence solutions.

We win as an organization through our core tenets. They include:

  • One Team. One Theme.
  • We sign it. We deliver it.
  • Be Accountable and Expect Accountability.
  • Raise Your Hand or Be Willing to Extend It.

Requirements

• Strong understanding of data management principles and practices (Preferred experience: AWS Redshift).

• Experience with Tableau server administration, including user management and permissions (preferred, not mandatory).

• Ability to monitor alerts and application logs for data processing issues and troubleshooting.

• Ability to handle and monitor support ticket queues and act on them based on SLAs and priority.

• Ability to work collaboratively with cross-functional teams, including Data Engineers and the BI team.

• Strong analytical and problem-solving skills.

• Familiar with data warehousing concepts and ETL processes.

• Experience with SQL, DBT, and database technologies such as Redshift, Postgres, MongoDB, etc. (a small JDBC query sketch follows this list)

• Familiar with data integration tools such as Fivetran or Funnel.io

• Familiar with programming languages such as Python.

• Familiar with cloud-based data technologies such as AWS.

• Experience with data ingestion and orchestration tools such as AWS Glue.

• Excellent communication and interpersonal skills.

• Should possess 2+ years of experience.
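Several of these requirements (SQL, Redshift, monitoring data loads against SLAs) meet in routine checks like the following hedged Scala sketch, which runs a row-count query over JDBC. The cluster endpoint, credentials, and table name are placeholders, and the Redshift JDBC driver is assumed to be on the classpath.

    import java.sql.DriverManager

    // Hypothetical sketch: a data-load sanity check against Redshift over JDBC.
    // The endpoint, credentials, and staging.orders are placeholders.
    object RedshiftLoadCheck {
      def main(args: Array[String]): Unit = {
        val url  = "jdbc:redshift://example-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev"
        val conn = DriverManager.getConnection(url, "etl_user", sys.env("REDSHIFT_PASSWORD"))
        try {
          val rs = conn.createStatement().executeQuery(
            "SELECT COUNT(*) FROM staging.orders WHERE load_date = CURRENT_DATE")
          // A real check would compare this count against an SLA threshold and alert
          if (rs.next()) println(s"Rows loaded today: ${rs.getLong(1)}")
        } finally conn.close()
      }
    }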

Figment Global Solutions Pvt Ltd
Jaipur
8 - 12 yrs
₹7L - ₹10L / yr
SQL
Oracle SQL Developer
MySQL
MongoDB
JSON

Senior Database (PL/SQL)

 

Work Experience: 8+ Years

Number of Vacancies: 2

Location:

CTC: As per industry standards

 

Job Position: Oracle PL/SQL Developer.

 

Required: Oracle Certified Database Developer

 

Key Skills:

  • Must have basic knowledge of SQL queries, joins, DDL, DML, TCL, and types, objects, and collections. Basic Oracle PL/SQL programming experience (procedures, packages, functions, exceptions).
  • Develop, implement, and optimize stored procedures and functions using PL/SQL (a calling-side sketch follows this list).
  • Writing queries, packages, procedures, functions, triggers, and ref cursors using Oracle 11g to 19c features, and designing stored procedures, functions, packages, tables, views, triggers, indexes, constraints, collections, bulk collects, etc.
  • Must have basic knowledge of the PL/SQL Developer tool.
  • Basic knowledge of MySQL and MongoDB administration.
  • Strong communication skills.
  • Good interpersonal and teamwork skills.
  • PL/SQL: stored procedures, functions, triggers.
  • Bulk collections.
  • UTL_FILE.
  • Materialized views.
  • Performance handling.
  • Use of hints in queries.
  • JSON (JSON objects, JSON tables, JSON queries).
  • BLOB/CLOB concepts.
  • External tables.
  • Dynamic SQL.
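The role is about writing PL/SQL itself, but a common companion task is invoking a stored procedure from application code. As a hedged sketch (the procedure name, parameters, and connection details are hypothetical), here is the calling side over JDBC from Scala:

    import java.sql.{DriverManager, Types}

    // Hypothetical sketch: calling an Oracle PL/SQL stored procedure via JDBC.
    // get_account_balance and the connection details are placeholders.
    object CallPlsqlProcedure {
      def main(args: Array[String]): Unit = {
        val conn = DriverManager.getConnection(
          "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "app_user", sys.env("ORACLE_PASSWORD"))
        try {
          val call = conn.prepareCall("{ call get_account_balance(?, ?) }")
          call.setLong(1, 42L)                        // IN: customer id
          call.registerOutParameter(2, Types.NUMERIC) // OUT: balance
          call.execute()
          println(s"Balance: ${call.getBigDecimal(2)}")
        } finally conn.close()
      }
    }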
SenecaGlobal
Posted by Shiva V
Remote, Hyderabad
4 - 6 yrs
₹15L - ₹20L / yr
Python
PySpark
Spark
Scala
Microsoft Azure Data Factory
• Should have good experience with Python or Scala / PySpark / Spark (a small Spark-on-Azure sketch follows this list)
• Experience with advanced SQL
• Experience with Azure Data Factory and Azure Databricks
• Experience with Azure IoT, Cosmos DB, and Blob Storage
• API management and FHIR API development
• Proficient with Git and CI/CD best practices
• Experience working with Snowflake is a plus
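As a rough, hedged illustration of the Spark-plus-Azure combination above, this Scala sketch reads JSON from Azure Blob Storage. It assumes the hadoop-azure (wasbs) connector is on the classpath; the storage account, container, path, and environment variable are placeholders.

    import org.apache.spark.sql.SparkSession

    // Hypothetical sketch: read JSON device telemetry from Azure Blob Storage.
    // mystorageacct, telemetry, and the path are placeholder names.
    object AzureBlobRead {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("AzureBlobRead").getOrCreate()

        // Storage-account key auth; SAS tokens or managed identity are alternatives
        spark.conf.set(
          "fs.azure.account.key.mystorageacct.blob.core.windows.net",
          sys.env("AZURE_STORAGE_KEY"))

        val telemetry = spark.read.json(
          "wasbs://telemetry@mystorageacct.blob.core.windows.net/devices/2024/")
        telemetry.printSchema()
        println(s"Records: ${telemetry.count()}")

        spark.stop()
      }
    }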
MNC
Agency job
via Fragma Data Systems by Harpreet Kour
Bengaluru (Bangalore)
5 - 9 yrs
₹16L - ₹20L / yr
Apache Hadoop
Apache Hive
HDFS
SSL
  • Responsibilities
    - Responsible for implementation and ongoing administration of Hadoop infrastructure.
    - Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments.
    - Working with data delivery teams to set up new Hadoop users, including setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for the new users (see the FileSystem sketch after this section).
    - Cluster maintenance, and creation and removal of nodes, using tools like Ganglia, Nagios, Cloudera Manager Enterprise, Dell OpenManage, and other tools.
    - Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
    - Screening Hadoop cluster job performance and capacity planning.
    - Monitoring Hadoop cluster connectivity and security.
    - Managing and reviewing Hadoop log files.
    - File system management and monitoring.
    - Diligently teaming with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
    - Collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
    Qualifications
    - Bachelor's degree in Information Technology, Computer Science, or another relevant field.
    - General operational expertise, such as good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks.
    - Hadoop skills such as HBase, Hive, Pig, and Mahout.
    - Ability to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, monitor critical parts of the cluster, configure NameNode high availability, and schedule, configure, and take backups.
    - Good knowledge of Linux, as Hadoop runs on Linux.
    - Familiarity with open-source configuration management and deployment tools such as Puppet or Chef, and with Linux scripting.

    Nice to Have
    - Knowledge of troubleshooting core Java applications is a plus.
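Many of the HDFS checks above can be scripted against the Hadoop FileSystem API rather than done by hand. A hedged sketch (the path is a placeholder; Hadoop client configuration is assumed on the classpath):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    // Hypothetical sketch: verify a new user's HDFS home directory and list it.
    // /user/newuser is a placeholder path.
    object HdfsUserCheck {
      def main(args: Array[String]): Unit = {
        val fs   = FileSystem.get(new Configuration())
        val home = new Path("/user/newuser")

        if (fs.exists(home)) {
          fs.listStatus(home).foreach { status =>
            println(s"${status.getPath}  ${status.getLen} bytes")
          }
        } else {
          println(s"Missing home directory: $home")
        }
      }
    }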

DataMetica
Posted by Nikita Aher
Pune, Hyderabad
3 - 12 yrs
₹5L - ₹25L / yr
Apache Kafka
Big Data
Hadoop
Apache Hive
Java

Summary
Our Kafka developer combines technical skills, communication skills, and business knowledge, and should be able to work on multiple medium-to-large projects. The successful candidate will have excellent technical skills in Apache/Confluent Kafka and an enterprise data warehouse (preferably GCP BigQuery or an equivalent cloud EDW), and will be able to take oral and written business requirements and develop efficient code to meet set deliverables.

 

Must Have Skills

  • Participate in the development, enhancement and maintenance of data applications both as an individual contributor and as a lead.
  • Leading in the identification, isolation, resolution and communication of problems within the production environment.
  • Leading developer applying technical skills in Apache/Confluent Kafka (preferred) or AWS Kinesis (optional), and in a cloud enterprise data warehouse: Google BigQuery (preferred), or AWS Redshift or Snowflake (optional).
  • Design and recommend the best approach for data movement from different sources to a cloud EDW using Apache/Confluent Kafka.
  • Performs independent functional and technical analysis for major projects supporting several corporate initiatives.
  • Communicate and work with IT partners and the user community at various levels, from senior management to developers to business SMEs, for project definition.
  • Works on multiple platforms and multiple projects concurrently.
  • Performs code and unit testing for complex-scope modules and projects.
  • Provide expertise and hands on experience working on Kafka connect using schema registry in a very high volume environment (~900 Million messages)
  • Provide expertise in Kafka brokers, ZooKeeper, KSQL, KStreams, and Kafka Control Center.
  • Provide expertise and hands on experience working on AvroConverters, JsonConverters, and StringConverters.
  • Provide expertise and hands on experience working on Kafka connectors such as MQ connectors, Elastic Search connectors, JDBC connectors, File stream connector,  JMS source connectors, Tasks, Workers, converters, Transforms.
  • Provide expertise and hands on experience on custom connectors using the Kafka core concepts and API.
  • Working knowledge of the Kafka REST proxy.
  • Ensure optimum performance, high availability and stability of solutions.
  • Create topics, setup redundancy cluster, deploy monitoring tools, alerts and has good knowledge of best practices.
  • Create stubs for producers, consumers, and consumer groups to help onboard applications from different languages/platforms (a minimal producer stub follows this list). Leverage Hadoop-ecosystem knowledge to design and develop capabilities that deliver solutions using Spark, Scala, Python, Hive, Kafka, and other tools in the Hadoop ecosystem.
  • Use automation tools for provisioning, such as Jenkins, uDeploy, or relevant technologies.
  • Ability to perform data related benchmarking, performance analysis and tuning.
  • Strong skills in In-memory applications, Database Design, Data Integration.
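To ground the producer/consumer stub work described above, here is a hedged sketch of a minimal Kafka producer in Scala using the standard Java client. The broker address, topic, and payload are placeholders; an Avro setup would swap in KafkaAvroSerializer and schema-registry configuration.

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
    import org.apache.kafka.common.serialization.StringSerializer

    // Hypothetical sketch: a minimal producer stub. localhost:9092 and the
    // "orders" topic are placeholders.
    object OrderProducer {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
        props.put(ProducerConfig.ACKS_CONFIG, "all") // wait for full replication

        val producer = new KafkaProducer[String, String](props)
        try {
          val record = new ProducerRecord[String, String]("orders", "order-42", """{"amount": 99.5}""")
          // send() is asynchronous; get() blocks here for the broker acknowledgment
          val meta = producer.send(record).get()
          println(s"Wrote to ${meta.topic}-${meta.partition}@${meta.offset}")
        } finally producer.close()
      }
    }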
The other Fruit
Posted by Dipendra Singh
Pune
1 - 5 yrs
₹3L - ₹15L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
Python
Data Structures
Algorithms
 
SD (ML and AI) job description:

  • Advanced degree in computer science, math, statistics, or a related discipline (a master's degree is a must)
  • Extensive data modeling and data architecture skills
  • Programming experience in Python and R
  • Background in machine learning frameworks such as TensorFlow or Keras
  • Knowledge of Hadoop or another distributed computing system
  • Experience working in an Agile environment
  • Advanced math skills (important): linear algebra, discrete math, differential equations (ODEs and numerical), theory of statistics 1, numerical analysis 1 (numerical linear algebra) and 2 (quadrature), abstract algebra, number theory, real analysis, complex analysis, and intermediate analysis (point-set topology)
  • Strong written and verbal communication skills
  • Hands-on experience with NLP and NLG
  • Experience with advanced statistical techniques and concepts (GLM/regression, random forests, boosting, trees, text mining) and their application (a generic random-forest sketch follows this list)
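The last item names random forests and boosting among the expected techniques. Purely as a generic, hedged illustration (Spark MLlib here stands in for whatever stack the team actually uses), a random-forest classifier in Scala; the input path and column names are hypothetical:

    import org.apache.spark.ml.Pipeline
    import org.apache.spark.ml.classification.RandomForestClassifier
    import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
    import org.apache.spark.ml.feature.VectorAssembler
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.col

    // Hypothetical sketch: train and evaluate a random forest with Spark MLlib.
    // data/churn.csv and all column names are placeholders.
    object ChurnForest {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("ChurnForest").master("local[*]").getOrCreate()
        val data = spark.read.option("header", "true").option("inferSchema", "true")
          .csv("data/churn.csv")
          .withColumn("churned", col("churned").cast("double")) // numeric label

        val assembler = new VectorAssembler()
          .setInputCols(Array("tenure", "monthly_spend", "support_calls"))
          .setOutputCol("features")
        val forest = new RandomForestClassifier()
          .setLabelCol("churned")
          .setFeaturesCol("features")
          .setNumTrees(100)

        val Array(train, test) = data.randomSplit(Array(0.8, 0.2), seed = 42)
        val model = new Pipeline().setStages(Array(assembler, forest)).fit(train)

        // Area under ROC on held-out data
        val auc = new BinaryClassificationEvaluator()
          .setLabelCol("churned")
          .evaluate(model.transform(test))
        println(f"Test AUC: $auc%.3f")

        spark.stop()
      }
    }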
 
Bengaluru (Bangalore)
5 - 5 yrs
₹10L - ₹15L / yr
Data Analyst
Machine Learning (ML)
Artificial Intelligence (AI)
Data Science
  • Measure sales-effectiveness efforts using data science and app/digital nudges.
  • Should be able to work on clickstream data (see the sessionization sketch after this list).
  • Should be well versed in, and willing to work hands-on with, various machine learning techniques.

Skills

  • Ability to lead a team of 5-6 members.
  • Ability to work with large data sets and present conclusions to key stakeholders.
  • Develop a clear understanding of the client’s business issue to inform the best approach to the problem.
  • Root-cause analysis
  • Define data requirements for creating a model and understand the business problem
  • Clean, aggregate, analyze, and interpret data, and carry out quality analysis of it
  • Set up data for predictive/prescriptive analysis
  • Development of AI/ML models or statistical/econometric models.
  • Working closely with team members
  • Identifying insights and creating presentations to demonstrate them
  • Supporting development and maintenance of proprietary marketing techniques and other knowledge development projects.
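As a hedged illustration of hands-on clickstream work (second bullet of the first list), this Scala sketch sessionizes events in Spark with a window function. The schema (user_id, event_time) and the 30-minute inactivity gap are assumptions, not details from the posting.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions._

    // Hypothetical sketch: derive sessions from raw clickstream events.
    // data/clickstream/ and the column names are placeholders.
    object Sessionize {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("Sessionize").master("local[*]").getOrCreate()
        val clicks = spark.read.parquet("data/clickstream/")

        val byUser = Window.partitionBy("user_id").orderBy("event_time")
        val gapSeconds = unix_timestamp(col("event_time")) -
          unix_timestamp(lag("event_time", 1).over(byUser))

        // A new session starts after 30+ minutes of inactivity (or at the first event)
        val sessions = clicks
          .withColumn("new_session", when(gapSeconds.isNull || gapSeconds > 1800, 1).otherwise(0))
          .withColumn("session_id", sum("new_session").over(byUser))

        sessions.groupBy("user_id")
          .agg(countDistinct("session_id").as("sessions"), count("*").as("events"))
          .show(10)

        spark.stop()
      }
    }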
Rivet Systems Pvt Ltd.
Posted by Shobha B K
Bengaluru (Bangalore)
5 - 19 yrs
₹10L - ₹30L / yr
ETL
Hadoop
Big Data
Pig
Spark
Strong exposure to ETL / Big Data / Talend / Hadoop / Spark / Hive / Pig

To be considered for a Senior Data Engineer position, a candidate must have a proven track record of architecting data solutions on current and advanced technical platforms. They must have the leadership ability to lead a team providing data-centric solutions with best practices and modern technologies in mind. They build collaborative relationships across all levels of the business and the IT organization. They possess analytical and problem-solving skills, and can research and provide appropriate guidance for synthesizing complex information and extracting business value. They have the intellectual curiosity and ability to deliver solutions with creativity and quality, work effectively with business users and customers to obtain business value from the requested work, and can communicate technical results to both technical and non-technical users using effective storytelling techniques and visualizations. They have demonstrated the ability to perform high-quality work with innovation, both independently and collaboratively.
