Scala Jobs in Kolkata

Explore top Scala job opportunities in Kolkata from top companies and startups. All jobs are added by verified employees, who can be contacted directly below.

at HCL Technologies

3 recruiters
Agency job
via Saiva System by Sunny Kumar
Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Bengaluru (Bangalore), Hyderabad, Chennai, Pune, Mumbai, Kolkata
5 - 10 yrs
₹5L - ₹20L / yr
Data engineering
Big Data
+2 more
Experience: 5+ years
Skills: Spark and Scala, along with Azure
Location: Pan India

Looking for someone with Big Data experience along with Azure.

Construction Tech Start-up

Agency job
via Merito by Jinita Sumaria
4 - 5 yrs
₹8L - ₹10L / yr
IT infrastructure
Business Development
Business Analysis
Business Intelligence (BI)
A/B Testing

About Company
Our client is a well-funded construction-tech start-up backed by a renowned group.

- Gather intelligence from key business leaders about needs and future growth
- Partner with the internal IT team to ensure each project meets a specific need and resolves successfully
- Assume responsibility for project tasks and ensure they are completed in a timely fashion
- Evaluate, test and recommend new opportunities for enhancing our software, hardware and IT processes
- Compile and distribute reports on application development and deployment
- Design and execute A/B testing procedures to extract data from test runs
- Analyze and draw conclusions from data related to customer behavior
- Consult with the executive team and the IT department on the newest technology and its implications in the industry
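The A/B testing responsibility above usually comes down to deciding whether two variants differ significantly in some conversion metric. As a minimal sketch (hypothetical helper names, not any company's actual tooling), a pooled two-proportion z-test in Scala could look like this:

```scala
// Minimal two-proportion z-test for an A/B experiment.
// Inputs: success counts and trial counts for variants A and B.
object ABTest {
  // Pooled two-proportion z-statistic.
  def zScore(succA: Long, nA: Long, succB: Long, nB: Long): Double = {
    val pA = succA.toDouble / nA
    val pB = succB.toDouble / nB
    val pooled = (succA + succB).toDouble / (nA + nB)
    val se = math.sqrt(pooled * (1 - pooled) * (1.0 / nA + 1.0 / nB))
    (pA - pB) / se
  }

  // Common decision rule: |z| > 1.96 is significant at the 5% level (two-sided).
  def isSignificant(succA: Long, nA: Long, succB: Long, nB: Long): Boolean =
    math.abs(zScore(succA, nA, succB, nB)) > 1.96
}
```

For example, 120/1000 conversions against 200/1000 is clearly significant under this rule, while 100/1000 against 105/1000 is not.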

Requirements :
- Bachelor's Degree in Software Development, Computer Engineering, Project Management or a related field
- 3+ years' experience in technology development and deployment

Posted by Mayank Bansal
2 - 5 yrs
₹7.5L - ₹12L / yr
Big Data
Data engineering
+6 more

Job Overview


We are looking for a savvy Data Engineer who will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support cross-functional teams on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next-generation data initiatives.


Responsibilities for Data Engineer


  • You will design, build, set up and maintain some of the best data pipelines from scratch
  • Translate complex business requirements into scalable technical solutions that meet data design standards. Strong understanding of analytics needs and the proactiveness to build generic solutions that improve efficiency
  • Collaborate with multiple cross-functional teams
  • Create and maintain optimal data pipeline architecture.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL.
  • Build analytics tools that utilize the data pipeline to provide actionable insights as required.
  • Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure.
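The extraction, transformation, and loading responsibilities above follow a common three-stage shape. As an illustration only (toy types and plain collections standing in for real Spark/SQL sources and warehouse sinks), a minimal ETL pipeline in Scala might look like:

```scala
// A toy ETL pipeline: extract raw records, transform them, load the results.
// In production these stages would sit on top of Spark/SQL sources; here
// plain collections stand in so the shape of the pipeline stays visible.
final case class RawEvent(userId: String, amountCents: Long)
final case class UserTotal(userId: String, totalCents: Long)

object EtlPipeline {
  // Extract: in a real pipeline this would read from a database or object store.
  def extract(source: Seq[RawEvent]): Seq[RawEvent] = source

  // Transform: drop invalid rows, then aggregate per user.
  def transform(events: Seq[RawEvent]): Seq[UserTotal] =
    events
      .filter(_.amountCents > 0) // basic data-quality rule
      .groupBy(_.userId)
      .map { case (id, evs) => UserTotal(id, evs.map(_.amountCents).sum) }
      .toSeq
      .sortBy(_.userId)

  // Load: in a real pipeline this would write to a warehouse table.
  def load(rows: Seq[UserTotal]): Map[String, Long] =
    rows.map(r => r.userId -> r.totalCents).toMap

  def run(source: Seq[RawEvent]): Map[String, Long] =
    load(transform(extract(source)))
}
```

Keeping the three stages as separate functions is what makes a pipeline like this testable and easy to re-point at new sources or sinks.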


Qualifications for Data Engineer


  • Expertise in Designing Data Pipelines, MPP Platforms & Hadoop Systems
  • Proven Track Record in Database Technologies - Management, Retrieval & Reporting.
  • Excellent Database skills in SQL, Elasticsearch & MongoDB
  • Hands-on experience with Node.js, Python and AWS
  • Excellent at handling WebSockets and data streaming.
  • Specialized in Data Structures.
  • Expected to perform well in a fast-paced environment, execute the tasks assigned, meet the production deadlines, and, at the same time, explore independently new innovative ideas.


Education & Experience


Bachelor's degree in Technology or equivalent, with 2+ years of experience in the domain


What we have to offer


  • An exciting and challenging working environment, with passionate and enthusiastic people and an entrepreneurial atmosphere.
  • Work directly with the owner, lead your area of expertise, and hold the keys to the company's most valuable asset.

at Aureus Tech Systems

3 recruiters
Posted by Naveen Yelleti
Kolkata, Hyderabad, Chennai, Bengaluru (Bangalore), Bhubaneswar, Visakhapatnam, Vijayawada, Trichur, Thiruvananthapuram, Mysore, Delhi, Noida, Gurugram, Nagpur
1 - 7 yrs
₹4L - ₹15L / yr
Data engineering
Big Data
+2 more

Skills and requirements

  • Experience analyzing complex and varied data in a commercial or academic setting.
  • Desire to solve new and complex problems every day.
  • Excellent ability to communicate scientific results to both technical and non-technical team members.


  • A degree in a numerically focused discipline such as Maths, Physics, Chemistry, Engineering, or Biological Sciences.
  • Hands-on experience with Python, PySpark, and SQL
  • Hands-on experience building end-to-end data pipelines
  • Hands-on experience with Azure Data Factory, Azure Databricks, and Data Lake is an added advantage
  • Experience with Big Data tools: Hadoop, Hive, Sqoop, Spark, Spark SQL
  • Experience with SQL or NoSQL databases for the purposes of data retrieval and management.
  • Experience in data warehousing and business intelligence tools, techniques and technology, as well as experience in diving deep on data analysis or technical issues to come up with effective solutions.
  • BS degree in math, statistics, computer science or equivalent technical field.
  • Experience in data mining structured and unstructured data (SQL, ETL, data warehouse, Machine Learning etc.) in a business environment with large-scale, complex data sets.
  • Proven ability to look at solutions in unconventional ways. Sees opportunities to innovate and can lead the way.
  • Willingness to learn and work on Data Science, ML, and AI.


Agency job
Chennai, Bengaluru (Bangalore), Kochi (Cochin), Coimbatore, Hyderabad, Pune, Kolkata, Noida, Gurugram, Mumbai
5 - 13 yrs
₹8L - ₹20L / yr
Snowflake schema

We are looking for a Snowflake developer for one of our premium clients, for their Pan-India locations.
Posted by Aarti Vohra
7 - 10 yrs
₹8L - ₹20L / yr
SQL server
Microsoft Analysis Services
+3 more
Experience: 7 to 8 years
Notice period: Immediate to 15 days
Job location: Kolkata
• Develop and improve solutions spanning data processing activities from the data lake (stage) to star schemas and reporting views/tables, and finally into SSAS.
• Develop and improve Microsoft Analysis Services cubes (tabular and dimensional)
• Collaborate with other teams within the organization and be able to devise the technical solution as it relates to the business & technical requirements
• Mentor team members and be proactive in training and coaching team members to develop their proficiency in Analysis Services
• Maintain documentation for all processes implemented
• Adhere to and suggest improvements to coding standards, applying best practices
• Proficient in MDX and DAX for querying in SSAS

Reputed MNC client of people first consultant

Agency job
Chennai, Coimbatore, Bengaluru (Bangalore), Pune, Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Kochi (Cochin), Kolkata
6 - 9 yrs
₹6L - ₹10L / yr
Data Warehouse (DWH)
Amazon Web Services (AWS)
+1 more
Designation: Informatica Cloud (IICS)
6-9 Years
Pan India
Job Description

Must have: work experience with Informatica Intelligent Cloud Services (IICS), SQL, and data analysis.
Roles and responsibilities
Tracking/management of all software assets, including internal license assessments
Participate in software vendor contract/license negotiations and the development of software licenses and associated maintenance contracts
Knowledge of software license procurement and hands-on experience with the Ariba tool
Prepare and assist in the performance of periodic compliance reports
Provide support to end users regarding specific vendor product use rights
Assist in the establishment of internal controls related to software asset management, governance, and compliance
Exposure to Oracle, Microsoft, IBM, Adobe, and other enterprise licensing
Experience handling cloud (AWS, Amazon, GCP) and billing
Participation in license compliance audits
Tracking/management of software license governance and compliance in accordance with enterprise policy, processes, procedures, and controls, by internal staff and external service providers
Required:
Working knowledge of Remedy, ServiceNow, or any IT asset tool platform
IT asset management and discovery tools experience
Experience interpreting licensing terms and conditions
Conceptual knowledge of Information Technology

at ValueLabs

1 video
1 recruiter
Agency job
via Saiva System by SARVDEV SINGH
Mumbai, Bengaluru (Bangalore), Pune, Hyderabad, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Chandigarh, Ahmedabad, Vadodara, Surat, Kolkata, Chennai
5 - 8 yrs
₹6L - ₹18L / yr
Data engineering
Big Data
+2 more
We are hiring for ValueLabs.
Role: Senior Software Engineer
Experience: 5+ years
Budget: 18 LPA

at Spica Systems

1 recruiter
Posted by Priyanka Bhattacharya
3 - 5 yrs
₹7L - ₹12L / yr
Apache Spark
We are a Silicon Valley-based start-up, established in 2019, and are recognized as experts in building products and providing R&D and software development services in a wide range of leading-edge technologies such as LTE, 5G, cloud services (public: AWS, Azure, GCP; private: OpenStack), and Kubernetes. We have a highly scalable and secured 5G Packet Core Network, orchestrated by an ML-powered Kubernetes platform, which can be deployed in various multi-cloud modes along with a test tool. Headquartered in San Jose, California, we have our R&D centre in Sector V, Salt Lake, Kolkata.


  • Overall 3 to 5 years of experience in designing and implementing complex, large-scale software.
  • Proficiency in Python is a must.
  • Experience with Apache Spark, Scala, Java, and Delta Lake
  • Experience designing and implementing templated ETL/ELT data pipelines
  • Expert-level experience in data pipeline orchestration using Apache Airflow for large-scale production deployments
  • Experience visualizing data from various tasks in the data pipeline using Apache Zeppelin, Plotly, or any other visualization library
  • Log management and log monitoring using ELK/Grafana
  • GitHub integration


Technology Stack: Apache Spark, Apache Airflow, Python, AWS, EC2, S3, Kubernetes, ELK, Grafana, Apache Arrow, Java
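A "templated" ETL/ELT pipeline, as mentioned in the requirements above, typically means the orchestration skeleton is fixed while concrete jobs plug in their own extract/transform/load behaviour. A minimal Scala sketch (hypothetical trait and object names; a real deployment would schedule such steps from Airflow):

```scala
// Template-method sketch of a pipeline: every job runs the same three
// phases in the same order, but supplies its own stage implementations.
trait PipelineTemplate[In, Mid, Out] {
  def extract(): In
  def transform(in: In): Mid
  def load(mid: Mid): Out

  // The fixed skeleton shared by all pipelines built from this template.
  final def run(): Out = load(transform(extract()))
}

// One concrete instantiation: word counts over a fixed in-memory source,
// "loading" just the count of the word "scala".
object WordCountPipeline extends PipelineTemplate[Seq[String], Map[String, Int], Int] {
  def extract(): Seq[String] = Seq("spark scala", "scala airflow scala")

  def transform(lines: Seq[String]): Map[String, Int] =
    lines.flatMap(_.split("\\s+")).groupBy(identity).view.mapValues(_.size).toMap

  def load(counts: Map[String, Int]): Int = counts.getOrElse("scala", 0)
}
```

The payoff of the template is that orchestration concerns (ordering, retries, monitoring) live in one place while each job only defines its three stages.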


at Alien Brains

5 recruiters
Posted by Praveen Baheti
0 - 15 yrs
₹4L - ₹8L / yr
Deep Learning
Machine Learning (ML)
Data Analytics
Data Science
+3 more
You'll give industry-standard training to engineering students and mentor them as they develop their custom mini-projects.
Posted by Kumar Aniket
Remote, Kolkata
0 - 4 yrs
₹3L - ₹7L / yr
Data Science
R Programming
We aim to transform the recruiting industry.