Amazon Glacier Jobs in Chennai


Apply to 11+ Amazon Glacier Jobs in Chennai on CutShort.io. Explore the latest Amazon Glacier Job opportunities across top companies like Google, Amazon & Adobe.

netmedscom

Posted by Vijay Hemnath
Chennai
5 - 10 yrs
₹10L - ₹30L / yr
Machine Learning (ML)
Software deployment
CI/CD
Cloud Computing
Snowflake schema
+19 more

We are looking for an outstanding ML Architect (Deployments) with expertise in deploying Machine Learning solutions/models into production and scaling them to serve millions of customers, and with an adaptable, productive working style suited to a fast-moving environment.

 

Skills:

- 5+ years deploying Machine Learning pipelines in large enterprise production systems.

- Experience developing end-to-end ML solutions, from business hypothesis to deployment, with an understanding of the entire ML development life cycle.
- Expert in modern software development practices, with solid experience in source control management and CI/CD.
- Proficient in designing relevant architectures/microservices to fulfil application integration, model monitoring, training/re-training, model management, model deployment, model experimentation/development, and alerting.
- Experience with public cloud platforms (Azure, AWS, GCP).
- Serverless services like AWS Lambda, Azure Functions, and/or Cloud Functions.
- Orchestration services like Data Factory, Data Pipeline, and/or Dataflow.
- Data science workbench/managed services like Azure Machine Learning, SageMaker, and/or AI Platform.
- Data warehouse services like Snowflake, Redshift, BigQuery, and/or Azure SQL DW.
- Distributed computing services like PySpark, EMR, and Databricks.
- Data storage services like Cloud Storage, S3, Azure Blob Storage, and S3 Glacier.
- Data visualization tools like Power BI, Tableau, QuickSight, and/or Qlik.
- Proven experience serving predictive algorithms and analytics through batch and real-time APIs.
- Solid experience working with software engineers, data scientists, product owners, business analysts, project managers, and business stakeholders to design holistic solutions.
- Strong technical acumen around automated testing.
- Extensive background in statistical analysis and modeling (distributions, hypothesis testing, probability theory, etc.).
- Strong hands-on experience with statistical packages and ML libraries (e.g., Python scikit-learn, Spark MLlib, etc.).
- Experience in effective data exploration and visualization (e.g., Excel, Power BI, Tableau, Qlik, etc.).
- Experience developing and debugging in one or more of Java and Python.
- Ability to work in cross-functional teams.
- Ability to apply Machine Learning techniques in production, including, but not limited to, neural networks, regression, decision trees, random forests, ensembles, SVMs, Bayesian models, K-Means, etc.
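The last bullet names several classical techniques, K-Means among them. As a library-free illustration of what one of these does under the hood (a sketch in plain Python, not any employer's actual stack):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-Means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # nearest centroid by squared Euclidean distance
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # recompute centroids; keep the old one if a cluster went empty
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters
```

On two well-separated 2-D clusters this converges to their means in a handful of iterations; production systems would of course use scikit-learn or Spark MLlib rather than hand-rolled code.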

 

Roles and Responsibilities:

Deploying ML models into production, and scaling them to serve millions of customers.

Technical solutioning skills, with a deep understanding of technical API integrations, AI/Data Science, Big Data, and public cloud architectures/deployments in a SaaS environment.

Strong stakeholder relationship management skills: able to influence and manage the expectations of senior executives.
Strong networking skills, with the ability to build and maintain strong relationships with business, operations, and technology teams, both internally and externally.

Provide software design and programming support to projects.

 

 Qualifications & Experience:

Engineering graduates or postgraduates, preferably in Computer Science from premier institutions, with 5-7 years of proven work experience as a Machine Learning Architect (Deployments) or in a similar role.

 

Indian Based IT Service Organization

Agency job
via People First Consultants by Aishwarya KA
Chennai, Tirunelveli
5 - 7 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+7 more

Greetings!


We are looking for a data engineer for one of our premium clients, for their Chennai and Tirunelveli locations.


Required Education/Experience


● Bachelor’s degree in Computer Science or a related field

● 5-7 years’ experience in the following:

● Snowflake and Databricks management

● Python and AWS Lambda

● Scala and/or Java

● Data integration services, SQL, and Extract, Transform, Load (ETL)

● Azure or AWS for development and deployment

● Jira or a similar tool during the SDLC

● Experience managing a codebase using a code repository in Git/GitHub or Bitbucket

● Experience working with a data warehouse

● Familiarity with structured and semi-structured data formats, including JSON, Avro, ORC, Parquet, and XML

● Exposure to working in an agile environment
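Semi-structured formats like those listed above usually get flattened into columns before loading into a warehouse. As a hedged sketch with the standard library only (the function name and the record shape are illustrative, not part of the client's stack):

```python
import json

def flatten(record, parent_key="", sep="."):
    """Recursively flatten a nested JSON object into dotted column names,
    a common first step when landing semi-structured data in warehouse tables."""
    items = {}
    for key, value in record.items():
        full_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, full_key, sep))  # recurse into nested objects
        else:
            items[full_key] = value  # leaf value becomes a column
    return items

raw = '{"id": 1, "user": {"name": "asha", "geo": {"city": "Chennai"}}}'
row = flatten(json.loads(raw))
# row now maps dotted paths like "user.geo.city" to scalar values
```

Warehouse engines such as Snowflake can query nested VARIANT data directly, but an explicit flattening step like this is still common in Python-based ingestion code.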


one-to-one, one-to-many, and many-to-many

Agency job
via The Hub by Sridevi Viswanathan
Chennai
5 - 10 yrs
₹1L - ₹15L / yr
AWS CloudFormation
Python
PySpark
AWS Lambda

5-7 years of experience in Data Engineering, with solid experience in the design, development, and implementation of end-to-end data ingestion and data processing systems on the AWS platform.

2-3 years of experience in AWS Glue, Lambda, Appflow, EventBridge, Python, PySpark, Lake House, S3, Redshift, Postgres, API Gateway, CloudFormation, Kinesis, Athena, KMS, IAM.

Experience in modern data architecture, Lake House, Enterprise Data Lake, Data Warehouse, API interfaces, solution patterns, standards and optimizing data ingestion.

Experience building data pipelines from source systems like SAP Concur, Veeva Vault, Azure Cost, various social media platforms, or similar source systems.

Expertise in analyzing source data and designing a robust and scalable data ingestion framework and pipelines adhering to client Enterprise Data Architecture guidelines.

Proficient in design and development of solutions for real-time (or near real time) stream data processing as well as batch processing on the AWS platform.

Work closely with business analysts, data architects, data engineers, and data analysts to ensure that the data ingestion solutions meet the needs of the business.

Troubleshoot and provide support for issues related to data quality and data ingestion solutions. This may involve debugging data pipeline processes, optimizing queries, or troubleshooting application performance issues.

Experience working in Agile/Scrum methodologies, with CI/CD tools and practices, coding standards, code reviews, source management (GitHub), Jira, Jira Xray, and Confluence.

Experience or exposure to design and development using Full Stack tools.

Strong analytical and problem-solving skills, excellent communication (written and oral), and interpersonal skills.

Bachelor's or master's degree in computer science or related field.
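Several of the requirements above (Lambda, S3, EventBridge, real-time ingestion) come together in event-driven pipelines. As a hedged sketch of the basic shape of an S3-triggered Lambda handler — the bucket and key names are purely illustrative, and a real job would fetch the object with boto3 before handing it to a transform stage:

```python
import json
import urllib.parse

def lambda_handler(event, context):
    """Skeleton AWS Lambda handler for an S3-triggered ingestion step.
    It only extracts bucket/key pairs from the S3 event notification;
    the actual read-and-transform work is out of scope for this sketch."""
    records = []
    for rec in event.get("Records", []):
        bucket = rec["s3"]["bucket"]["name"]
        # S3 URL-encodes object keys in event payloads, so decode them
        key = urllib.parse.unquote_plus(rec["s3"]["object"]["key"])
        records.append({"bucket": bucket, "key": key})
    return {"statusCode": 200, "body": json.dumps(records)}

# exercising the handler locally with a hand-built sample event
event = {"Records": [{"s3": {"bucket": {"name": "raw-ingest"},
                             "object": {"key": "2024/01/sales%20data.csv"}}}]}
resp = lambda_handler(event, None)
```

Because the handler is a plain function, it can be unit-tested with synthetic events like this before being deployed behind an S3 or EventBridge trigger.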

 

 

A fast-growing Big Data company

Agency job
via Careerconnects by Kumar Narayanan
Noida, Bengaluru (Bangalore), Chennai, Hyderabad
6 - 8 yrs
₹10L - ₹15L / yr
AWS Glue
SQL
Python
PySpark
Data engineering
+6 more

AWS Glue Developer 

Work Experience: 6 to 8 Years

Work Location:  Noida, Bangalore, Chennai & Hyderabad

Must-Have Skills: AWS Glue, DMS, SQL, Python, PySpark, data integration and DataOps

Job Reference ID:BT/F21/IND


Job Description:

Design, build and configure applications to meet business process and application requirements.


Responsibilities:

7 years of work experience with ETL, data modelling, and data architecture. Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark. Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions. Orchestrate using Airflow.


Technical Experience:

Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, and batch and streaming data pipelines. Experience building data pipelines and applications to stream and process large datasets at low latency.


➢ Enhancements, new development, defect resolution and production support of Big data ETL development using AWS native services.

➢ Create data pipeline architecture by designing and implementing data ingestion solutions.

➢ Integrate data sets using AWS services such as Glue, Lambda functions, and Airflow.

➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.

➢ Author ETL processes using Python and PySpark.

➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.

➢ ETL process monitoring using CloudWatch events.

➢ You will work in collaboration with other teams; good communication is a must.

➢ Must have experience using AWS service APIs, the AWS CLI, and SDKs.
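Glue jobs themselves are typically authored in PySpark, which won't run outside a Spark environment; as a dependency-free sketch of the same extract → transform → load shape (the column names and FX rate are hypothetical, for illustration only):

```python
import csv
import io

def transform(rows):
    """Toy transform stage: drop incomplete records and derive a column,
    mirroring the filter/derive steps a Glue/PySpark job would run at scale."""
    for row in rows:
        if not row["amount"]:
            continue  # data-quality filter: skip rows with no amount
        row["amount_inr"] = float(row["amount"]) * 83.0  # hypothetical FX rate
        yield row

# extract: a stand-in for an object read from S3
source = "id,amount\n1,10\n2,\n3,2.5\n"
# load target: a plain list standing in for a Redshift COPY or S3 write
loaded = list(transform(csv.DictReader(io.StringIO(source))))
```

In an actual Glue job the `csv.DictReader` would be a DynamicFrame or Spark DataFrame read, and the generator body would become column expressions, but the pipeline shape is the same.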


Professional Attributes:

➢ Experience operating very large data warehouses or data lakes. Expert-level skills in writing and optimizing SQL. Extensive real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures, with a focus on cloud technology.

➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, DynamoDB, Athena, and Glue in an AWS environment.

➢ Expertise in S3, RDS, Redshift, Kinesis, EC2 clusters highly desired.


Qualification:

➢ Degree in Computer Science, Computer Engineering or equivalent.


Salary: Commensurate with experience and demonstrated competence

AxionConnect Infosolutions Pvt Ltd
Posted by Shweta Sharma
Pune, Bengaluru (Bangalore), Hyderabad, Nagpur, Chennai
5.5 - 7 yrs
₹20L - ₹25L / yr
Django
Flask
Snowflake
Snowflake schema
SQL
+4 more

Job Location: Hyderabad/Bangalore/ Chennai/Pune/Nagpur

Notice period: Immediate - 15 days

 

Python Developer with Snowflake

 

Job Description :


  1. 5.5+ years of strong Python development experience with Snowflake.
  2. Strong hands-on experience with SQL and the ability to write complex queries.
  3. Strong understanding of how to connect to Snowflake using Python and handle any type of file.
  4. Development of data analysis and data processing engines using Python.
  5. Good experience in data transformation using Python.
  6. Experience in Snowflake data loads using Python.
  7. Experience in creating user-defined functions in Snowflake.
  8. SnowSQL implementation.
  9. Knowledge of query performance tuning will be an added advantage.
  10. Good understanding of data warehouse (DWH) concepts.
  11. Ability to interpret/analyze business requirements and functional specifications.
  12. Good to have: dbt, Fivetran, and AWS knowledge.
iLink Systems
Posted by Ganesh Sooriyamoorthu
Chennai, Pune, Bengaluru (Bangalore)
5 - 15 yrs
₹14L - ₹25L / yr
PowerBI
Data storage
Data Structures
Algorithms
Data Lake
+2 more
Job Description
The Azure Data Engineer is responsible for building, implementing and supporting Microsoft BI solutions to meet market and/or client requirements. They apply knowledge of technologies, applications, methodologies, processes and tools to support a client, project or entity.
 
We are currently looking for programmers or experienced programmers with strong technical expertise in Azure Data Lake, Azure Synapse, and Power BI reporting. As part of a collaborative team, and under the supervision of a project head, he or she will be responsible for designing and developing software products to implement new features and support current applications.
 
 
Responsibilities:
- Create ER diagrams and write relational database queries
- Create database objects and maintain referential integrity
- Configure, deploy and maintain database
- Participate in development and maintenance of Data warehouses
- Design, develop and deploy packages
- Create and deploy reports
- Provide technical design, coding assistance to the team to accomplish the project deliverables as planned/scoped.

Requirements

 
Required Skills:
- At least 3 years of experience in Azure Data Lake Storage
- At least 3 years of experience in Azure Synapse Pipelines
- At least 3 years of experience in Power BI
- At least 3 years of experience in Azure Machine Learning
- At least 3 years of experience in Azure Databricks
- Should be well versed with Data Structures & algorithms
- Understanding of software development lifecycle
- Excellent analytical and problem-solving skills.
- Ability to work independently as a self-starter, and within a team environment.
- Good Communication skills- Written and Verbal
ValueLabs
Agency job
via Saiva System by SARVDEV SINGH
Mumbai, Bengaluru (Bangalore), Pune, Hyderabad, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Chandigarh, Ahmedabad, Vadodara, Surat, Kolkata, Chennai
5 - 8 yrs
₹6L - ₹18L / yr
PowerBI
PL/SQL
PySpark
Data engineering
Big Data
+2 more
We are hiring for ValueLabs.
Senior Software Engineer
Must have: Power BI with PL/SQL
Experience: 5+ years
CTC: 18 LPA
Work mode: Hybrid (WFH)
Bungee Tech India
Posted by Abigail David
Remote, NCR (Delhi | Gurgaon | Noida), Chennai
5 - 10 yrs
₹10L - ₹30L / yr
Big Data
Hadoop
Apache Hive
Spark
ETL
+3 more

Company Description

At Bungee Tech, we help retailers and brands meet customers everywhere and on every occasion. We believe that accurate, high-quality data matched with compelling market insights empowers retailers and brands to keep their customers at the center of all the innovation and value they deliver.

 

We provide a clear and complete omnichannel picture of their competitive landscape to retailers and brands. We collect billions of data points every day and multiple times in a day from publicly available sources. Using high-quality extraction, we uncover detailed information on products or services, which we automatically match, and then proactively track for price, promotion, and availability. Plus, anything we do not match helps to identify a new assortment opportunity.

 

Empowered with this unrivalled intelligence, we unlock compelling analytics and insights that, once blended with verified partner data from trusted sources such as Nielsen, paint a complete, consolidated picture of the competitive landscape.

We are looking for a Big Data Engineer who will work on the collecting, storing, processing, and analyzing of huge sets of data. The primary focus will be on choosing optimal solutions to use for these purposes, then maintaining, implementing, and monitoring them.

You will also be responsible for integrating them with the architecture used in the company.

 

We're working on the future. If you are seeking an environment where you can drive innovation, if you want to apply state-of-the-art software technologies to solve real-world problems, and if you want the satisfaction of providing visible benefit to end users in an iterative, fast-paced environment, this is your opportunity.

 

Responsibilities

As an experienced member of the team, in this role, you will:

 

  • Contribute to evolving the technical direction of analytical systems and play a critical role in their design and development.

 

  • You will research, design and code, troubleshoot and support. What you create is also what you own.

 

  • Develop the next generation of automation tools for monitoring and measuring data quality, with associated user interfaces.

 

  • Be able to broaden your technical skills and work in an environment that thrives on creativity, efficient execution, and product innovation.

 

BASIC QUALIFICATIONS

  • Bachelor’s degree or higher in an analytical area such as Computer Science, Physics, Mathematics, Statistics, Engineering or similar.
  • 5+ years relevant professional experience in Data Engineering and Business Intelligence
  • 5+ years with advanced SQL (analytical functions), ETL, and data warehousing.
  • Strong knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, ETL/ ELT and reporting/analytic tools and environments, data structures, data modeling and performance tuning.
  • Ability to effectively communicate with both business and technical teams.
  • Excellent coding skills in Java, Python, C++, or equivalent object-oriented programming language
  • Understanding of relational and non-relational databases and basic SQL
  • Proficiency with at least one of these scripting languages: Perl / Python / Ruby / shell script

 

PREFERRED QUALIFICATIONS

 

  • Experience with building data pipelines from application databases.
  • Experience with AWS services - S3, Redshift, Spectrum, EMR, Glue, Athena, ELK etc.
  • Experience working with Data Lakes.
  • Experience providing technical leadership and mentoring other engineers on best practices in the data engineering space
  • Sharp problem-solving skills and the ability to resolve ambiguous requirements
  • Experience on working with Big Data
  • Knowledge and experience on working with Hive and the Hadoop ecosystem
  • Knowledge of Spark
  • Experience working with Data Science teams
Mobile Programming LLC
Posted by Apurva Kalsotra
Mohali, Gurugram, Bengaluru (Bangalore), Chennai, Hyderabad, Pune
3 - 8 yrs
₹3L - ₹9L / yr
Data Warehouse (DWH)
Big Data
Spark
Apache Kafka
Data engineering
+14 more
Day-to-day Activities
Develop complex queries, pipelines and software programs to solve analytics and data mining problems
Interact with other data scientists, product managers, and engineers to understand business problems and technical requirements, and to deliver predictive and smart data solutions
Prototype new applications or data systems
Lead data investigations to troubleshoot data issues that arise along the data pipelines
Collaborate with different product owners to incorporate data science solutions
Maintain and improve data science platform
Must Have
BS/MS/PhD in Computer Science, Electrical Engineering or related disciplines
Strong fundamentals: data structures, algorithms, database
5+ years of software industry experience with 2+ years in analytics, data mining, and/or data warehouse
Fluency with Python
Experience developing web services using REST approaches.
Proficiency with SQL/Unix/Shell
Experience in DevOps (CI/CD, Docker, Kubernetes)
Self-driven, challenge-loving, and detail-oriented, with team spirit, excellent communication skills, and the ability to multi-task and manage expectations
Preferred
Industry experience with big data processing technologies such as Spark and Kafka
Experience with machine learning algorithms and/or R a plus 
Experience in Java/Scala a plus
Experience with any MPP analytics engines like Vertica
Experience with data integration tools like Pentaho/SAP Analytics Cloud
Toyota Connected
Posted by Suganya R
Chennai
8 - 12 yrs
₹20L - ₹35L / yr
Machine Learning (ML)
Deep Learning
OpenCV

About Toyota Connected

If you want to change the way the world works, transform the automotive industry and positively impact others on a global scale, then Toyota Connected is the right place for you! Within our collaborative, fast-paced environment we focus on continual improvement and work in a highly iterative way to deliver exceptional value in the form of connected products and services that wow and delight our customers and the world around us.

 

About the Team

Toyota Connected India is hiring talented engineers at Chennai to use Deep Learning, Computer vision, Big data, high performance cloud-based services and other cutting-edge technologies to transform the customer experience with their vehicle. Come help us re-imagine what mobility can be today and for years to come!


 

Job Description

The Toyota Connected team is looking for a Senior ML Engineer (Computer Vision) to be part of a highly talented engineering team helping create new products and services from the ground up for the next generation of connected vehicles. We are looking for team members who are creative problem solvers, excited to work in new technology areas, and ready to wear multiple hats to get things done in a highly energized, fast-paced, innovative, and collaborative startup environment.


 

What you will do

• Develop solutions using Machine Learning/Deep Learning and other advanced technologies to solve a variety of problems

• Develop image analysis algorithms and deep learning architectures to solve Computer vision related problems.

• Implement cutting-edge machine learning techniques in image classification, object detection, semantic segmentation, sequence modeling, etc., using frameworks such as OpenCV, TensorFlow, and PyTorch.

• Translate user stories and business requirements to technical solutions by building quick prototypes or proof of concepts with several business and technical stakeholder groups in both internal and external organizations

• Partner with leaders in the area and have the insight to select off-the-shelf components vs. building from scratch

• Convert the proof of concepts to production-grade solutions that can scale for hundreds of thousands of users

• Be hands-on where required and lead from the front in following best practices in development and CI/CD methods

• Own delivery of features from top to bottom, from concept to code to production

• Develop tools and libraries that will enable rapid and scalable development in the future
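The image-analysis and CNN work described above ultimately rests on operations like 2-D convolution. As a library-free sketch of the core operation (frameworks like TensorFlow and PyTorch implement this far more efficiently):

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution over nested lists.
    (Strictly, this is cross-correlation — the kernel is not flipped —
    which is the convention most deep learning frameworks use.)"""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(ow)
        ]
        for i in range(oh)
    ]
```

Sliding a 2x2 averaging-style kernel over a 3x3 image yields a 2x2 output; a CNN stacks many such learned kernels with nonlinearities between layers.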

You are a successful candidate if

• You are smart and can demonstrate it

• You have 8+ years of experience as a software engineer, with a minimum of 3 years of hands-on experience delivering products or solutions that utilized Computer Vision

• Strong experience in deploying solutions to production, with hands-on experience in any public cloud environment (AWS, GCP or Azure)

• Excellent proficiency in OpenCV or related computer vision frameworks and libraries

• Mathematical understanding of a variety of statistical learning algorithms (Reinforcement Learning, Supervised/Unsupervised, Graphical Models)

• Expertise in a variety of Deep Learning architectures, including Residual Networks, RNN/CNN, Transformers, and Transfer Learning, and experience delivering value using these in real production environments for real customers

• You have deep proficiency in Python and at least one other major programming language (C++, Java, Golang)

• You are very fluent in one or more ML tools/libraries like TensorFlow, PyTorch, Caffe, and/or Theano, and have solved several real-life problems using these

• We think the knowledge acquired earning a degree in Computer Science or Math would be of great value in this position, but if you're smart and have the experience that backs up your abilities, for us, talent trumps degree every time


 

What is in it for you? 

• Top of the line compensation!

• You'll be treated like the professional we know you are and left to manage your own time and workload.

• Yearly gym membership reimbursement. & Free catered lunches. 

• No dress code! We trust you are responsible enough to choose what’s appropriate to wear for the day.

• Opportunity to build products that improve the safety and convenience of millions of customers.
Our Core Values

  • Empathetic: We begin making decisions by looking at the world from the perspective of our customers, teammates, and partners.
  • Passionate: We are here to build something great, not just for the money. We are always looking to improve the experience of our millions of customers
  • Innovative: We experiment with ideas to get to the best solution. Any constraint is a challenge, and we love looking for creative ways to solve them.
  • Collaborative: When it comes to people, we think the whole is greater than its parts and that everyone has a role to play in the success!
LatentView Analytics
Bengaluru (Bangalore), Chennai
9 - 14 yrs
₹9L - ₹14L / yr
Data Structures
Business Development
Data Analytics
Regression Testing
Machine Learning (ML)
+4 more
Required Skill Set:
- 5+ years of hands-on experience in delivering results-driven analytics solutions with proven business value
- Great consulting and quantitative skills, a detail-oriented approach, and proven expertise in developing solutions using SQL, R, Python, or similar tools
- A background in Statistics / Econometrics / Applied Math / Operations Research would be considered a plus
- Exposure to working with globally dispersed teams based out of India or other offshore locations

Role Description / Responsibilities:
- Be the face of LatentView in the client's organization and help define analytics-driven consulting solutions to business problems
- Translate business problems into analytic solution requirements and work with the LatentView team to develop high-quality solutions
- Communicate effectively with the client / offshore team to manage client expectations and ensure timeliness and quality of insights
- Develop expertise in the client's business and help translate that into increasingly high-value advisory solutions for the client
- Oversee project delivery to ensure the team meets quality, productivity, and SLA objectives
- Grow the account in terms of revenue and the size of the team

You should apply if you want to:
- Change the world with math and models: at the core, we believe that analytics can help drive business transformation and lasting competitive advantage. We work with a heavy mix of algorithms, analysis, large databases, and ROI to positively transform many a client's business performance
- Make a direct impact on business: your contribution to delivering results-driven solutions can potentially lead to millions of dollars of additional revenue or profit for our clients
- Thrive in a fast-paced environment: you work in small teams, in an entrepreneurial environment, and in a meritorious culture that values speed, growth, diversity, and contribution
- Work with great people: our selection process ensures that we hire only the very best, while more than 50% of our analysts and 90% of our managers are alumni of prestigious global institutions