Scala Jobs in Kolkata
at HCL Technologies
Skill - Spark and Scala, along with Azure
Location - Pan India
Looking for someone with Big Data experience along with Azure
Construction Tech Start-up
Our client is a well-funded construction tech start-up backed by a renowned group.
- Gather intelligence from key business leaders about needs and future growth
- Partner with the internal IT team to ensure each project meets a specific need and is completed successfully
- Assume responsibility for project tasks and ensure they are completed in a timely fashion
- Evaluate, test and recommend new opportunities for enhancing our software, hardware and IT processes
- Compile and distribute reports on application development and deployment
- Design and execute A/B testing procedures to extract data from test runs
- Evaluate and draw conclusions from data related to customer behavior
- Consult with the executive team and the IT department on the newest technology and its implications in the industry
- Bachelor's Degree in Software Development, Computer Engineering, Project Management or a related field
- 3+ years' experience in technology development and deployment
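The A/B-testing and customer-behavior duties above are often evaluated with a standard two-proportion z-test. A minimal sketch using only the standard library; the conversion counts and variant sizes are made up for illustration, not taken from the posting:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the z statistic for the difference in conversion rates
    between variant A and variant B of an A/B test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test run: 200/1000 conversions on A, 260/1000 on B.
z = two_proportion_z(conv_a=200, n_a=1000, conv_b=260, n_b=1000)
print(round(z, 2))  # |z| > 1.96 means significant at the 5% level
```

In practice the cutoff (1.96 for a two-sided 5% test) and the sample sizes would be fixed before the test runs, not after.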
at Teesta Investment Private Limited
We are looking for a savvy Data Engineer who will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support cross-functional teams on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next-generation data initiatives.
Responsibilities for Data Engineer
- You will design, build, set up and maintain some of the best data pipelines from scratch
- Translate complex business requirements into scalable technical solutions that meet data design standards. Demonstrate a strong understanding of analytics needs and proactively build generic solutions that improve efficiency
- Collaborate with multiple cross-functional teams
- Create and maintain optimal data pipeline architecture.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL.
- Build analytics tools that utilize the data pipeline to provide actionable insights as required.
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure.
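The extraction, transformation, and loading responsibilities above can be sketched in miniature. Here an in-memory SQLite database stands in for real source and target systems; the table and column names are illustrative assumptions, not from the posting:

```python
import sqlite3

def run_etl(conn: sqlite3.Connection) -> int:
    """Toy ETL step: extract raw events, filter bad rows,
    and load per-user aggregates into an analytics table."""
    cur = conn.cursor()
    # Extract: raw events as they arrive from a source system.
    cur.execute("CREATE TABLE raw_events (user_id INTEGER, amount REAL)")
    cur.executemany(
        "INSERT INTO raw_events VALUES (?, ?)",
        [(1, 10.0), (1, 5.0), (2, 7.5), (2, None)],  # None = bad record
    )
    # Transform + load: aggregate clean rows into an analytics table.
    cur.execute("CREATE TABLE user_totals (user_id INTEGER, total REAL)")
    cur.execute(
        """
        INSERT INTO user_totals
        SELECT user_id, SUM(amount)
        FROM raw_events
        WHERE amount IS NOT NULL
        GROUP BY user_id
        """
    )
    conn.commit()
    return cur.execute("SELECT COUNT(*) FROM user_totals").fetchone()[0]

conn = sqlite3.connect(":memory:")
print(run_etl(conn))  # number of users loaded
```

A production pipeline would read from external sources and write to a warehouse, but the shape — extract, clean, aggregate, load — is the same.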
Qualifications for Data Engineer
- Expertise in Designing Data Pipelines, MPP Platforms & Hadoop Systems
- Proven Track Record in Database Technologies - Management, Retrieval & Reporting.
- Excellent Database skills in SQL, Elasticsearch & MongoDB
- Hands-on experience with Node.js, Python and AWS
- Excellent at handling WebSockets and data streaming.
- Specialized in Data Structures.
- Expected to perform well in a fast-paced environment, execute assigned tasks, meet production deadlines, and, at the same time, independently explore new and innovative ideas.
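The data-streaming skill above can be illustrated with a minimal incremental-processing sketch. A plain Python generator stands in for a real WebSocket feed (in practice a library such as `websockets` would supply the messages); the message format and symbol are illustrative assumptions:

```python
from typing import Iterator

def message_stream() -> Iterator[dict]:
    """Stand-in for a WebSocket message source (e.g. a price feed)."""
    for price in [100.0, 101.5, 99.8, 102.2]:
        yield {"symbol": "XYZ", "price": price}

def running_average(stream: Iterator[dict]) -> Iterator[float]:
    """Emit the running average price without buffering the whole stream."""
    total, count = 0.0, 0
    for msg in stream:
        total += msg["price"]
        count += 1
        yield total / count

averages = list(running_average(message_stream()))
print(averages[-1])  # average after the last message
```

The point of the pattern is that each message is processed as it arrives, with constant memory, which is what distinguishes streaming from batch processing.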
Education & Experience
Bachelor's degree in Technology or equivalent, with 2+ years of experience in the domain
What we have to offer
- An exciting and challenging working environment with passionate and enthusiastic people with an entrepreneurial atmosphere.
- Work directly with the owner, lead your area of expertise, and hold the keys to the company's most valuable asset.
at Aureus Tech Systems
Skills and requirements
- Experience analyzing complex and varied data in a commercial or academic setting.
- Desire to solve new and complex problems every day.
- Excellent ability to communicate scientific results to both technical and non-technical team members.
- A degree in a numerically focused discipline such as Maths, Physics, Chemistry, Engineering or Biological Sciences.
- Hands-on experience with Python, PySpark and SQL
- Hands-on experience building end-to-end data pipelines.
- Hands-on experience with Azure Data Factory, Azure Databricks and Data Lake - an added advantage
- Experience with Big Data tools: Hadoop, Hive, Sqoop, Spark, Spark SQL
- Experience with SQL or NoSQL databases for the purposes of data retrieval and management.
- Experience in data warehousing and business intelligence tools, techniques and technology, as well as experience in diving deep on data analysis or technical issues to come up with effective solutions.
- BS degree in math, statistics, computer science or equivalent technical field.
- Experience in data mining structured and unstructured data (SQL, ETL, data warehouse, Machine Learning etc.) in a business environment with large-scale, complex data sets.
- Proven ability to look at solutions in unconventional ways. Sees opportunities to innovate and can lead the way.
- Willing to learn and work on Data Science, ML, AI.
Top IT MNC
We are looking for a Snowflake developer for one of our premium clients, for a PAN India location
at Avhan Technologies Pvt Ltd
Reputed MNC client of people first consultant
Experience: 6-9 Years
Location: Pan India
Assist in the establishment of internal controls related to software asset management, governance and compliance
Handled cloud (AWS, GCP) and billing
Participated in license compliance audits.
Tracking/management of software license governance and compliance in accordance with enterprise policy, process, procedures and controls by internal staff and external service providers
Senior Software Engineer
Power BI with PL/SQL
Experience: 5+ years
CTC: 18 LPA
at Spica Systems
- Overall 3 to 5 years of experience in designing and implementing complex, large-scale software.
- Proficiency in Python is a must.
- Experience in Apache Spark, Scala, Java and Delta Lake
- Experience in designing and implementing templated ETL/ELT data pipelines
- Expert-level experience in data pipeline orchestration using Apache Airflow for large-scale production deployments
- Experience in visualizing data from various tasks in the data pipeline using Apache Zeppelin/Plotly or any other visualization library.
- Log management and log monitoring using ELK/Grafana
- GitHub integration
Technology Stack: Apache Spark, Apache Airflow, Python, AWS, EC2, S3, Kubernetes, ELK, Grafana, Apache Arrow, Java
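The templated ETL/ELT pipelines mentioned above can be sketched as a reusable template whose extract, transform, and load steps are supplied per pipeline. In production an orchestrator such as Apache Airflow would schedule these steps; every name in this sketch is illustrative:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class EtlTemplate:
    """A pipeline template: the run() skeleton is fixed,
    while the three steps vary per pipeline instance."""
    extract: Callable[[], Iterable[dict]]
    transform: Callable[[dict], dict]
    load: Callable[[List[dict]], None]

    def run(self) -> int:
        rows = [self.transform(r) for r in self.extract()]
        self.load(rows)
        return len(rows)

# One concrete pipeline instantiated from the template; an in-memory
# list stands in for the real sink (warehouse table, S3 prefix, etc.).
sink: List[dict] = []
pipeline = EtlTemplate(
    extract=lambda: [{"qty": 2, "price": 3.0}, {"qty": 1, "price": 5.0}],
    transform=lambda r: {**r, "total": r["qty"] * r["price"]},
    load=sink.extend,
)
print(pipeline.run())  # number of rows loaded
```

The same template, given different step callables, yields a new pipeline without rewriting the orchestration skeleton, which is the idea behind templated DAG generation in Airflow.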