matplotlib Jobs in Hyderabad


Apply to 11+ matplotlib Jobs in Hyderabad on CutShort.io. Explore the latest matplotlib Job opportunities across top companies like Google, Amazon & Adobe.

Chennai, Bengaluru (Bangalore), Hyderabad
2 - 5 yrs
₹4L - ₹13L / yr
Machine Learning (ML)
Data Science
Python
NumPy
pandas
+3 more

 

Job Title: Analyst / Sr. Analyst – Data Science Developer (Python)

Experience: 2 to 5 years

Location: Bengaluru / Hyderabad / Chennai

Notice Period: Candidates should be able to join within 2 months (max); immediate joiners preferred.

 

About the role:

 

We are looking for an Analyst / Senior Analyst with a strong Python background and experience in the analytics domain.

 

Desired Skills, Competencies & Experience:

 

• 2-4 years of experience working in the analytics domain with a strong Python background.

• Visualization skills in Python with Plotly, Matplotlib, Seaborn, etc., and the ability to create customized plots using such tools.

• Ability to write effective, scalable, and modular code; able to quickly understand, test, and debug existing Python project modules and contribute to them.

• Familiarity with Git workflows.
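As a rough illustration of the customized plotting skills named above (the data and labels here are invented for the sketch):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display required
import matplotlib.pyplot as plt
import numpy as np

# Toy data: two series compared on one customized figure
x = np.linspace(0, 10, 100)
fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(x, np.sin(x), label="sin(x)", linewidth=2)
ax.plot(x, np.cos(x), label="cos(x)", linestyle="--")

# Customizations beyond the defaults: title, axis labels, legend, grid
ax.set_title("Customized comparison plot")
ax.set_xlabel("x")
ax.set_ylabel("value")
ax.legend(loc="upper right")
ax.grid(True, alpha=0.3)
fig.savefig("comparison.png", dpi=150)
```

The same pattern carries over to Seaborn or Plotly, which layer on top of (or replace) the Matplotlib `Axes` API used here.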

 

Good to Have:

• Familiarity with cloud platforms such as AWS, Azure ML, Databricks, and GCP.

• Understanding of shell scripting and Python package development.

• Experience with Python data science packages such as pandas, NumPy, and scikit-learn.

• ML model building and evaluation experience using scikit-learn.
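A minimal sketch of the scikit-learn model building and evaluation workflow mentioned above, on synthetic data (the dataset and model choice are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic binary-classification data: label depends on two features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Standard split -> fit -> evaluate loop
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = LogisticRegression().fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
```

In practice the same fit/predict/score interface applies to most sklearn estimators, which is what makes model comparison straightforward.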

 

Technogen India Pvt Ltd
Posted by Mounika G
Hyderabad
11 - 16 yrs
₹24L - ₹27L / yr
Data Warehouse (DWH)
Informatica
ETL
Amazon Web Services (AWS)
SQL
+1 more

Daily and monthly responsibilities

  • Review and coordinate with business application teams on data delivery requirements.
  • Develop estimation and proposed delivery schedules in coordination with development team.
  • Develop sourcing and data delivery designs.
  • Review data model, metadata and delivery criteria for solution.
  • Review and coordinate with team on test criteria and performance of testing.
  • Contribute to the design, development and completion of project deliverables.
  • Complete in-depth data analysis and contribute to strategic efforts.
  • Develop a complete understanding of how we manage data, with a focus on improving how data is sourced and managed across multiple business areas.

 

Basic Qualifications

  • Bachelor’s degree.
  • 5+ years of data analysis working with business data initiatives.
  • Knowledge of Structured Query Language (SQL) and use in data access and analysis.
  • Proficient in data management including data analytical capability.
  • Excellent verbal and written communication skills, with high attention to detail.
  • Experience with Python.
  • Presentation skills in demonstrating system design and data analysis solutions.
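As a small sketch of the SQL-based data access and analysis the qualifications call for, using Python's built-in sqlite3 (the table and column names are invented):

```python
import sqlite3

# In-memory database with a toy fact table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("north", 120.0), ("north", 80.0), ("south", 350.0)],
)

# Aggregate revenue per region, largest first
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM orders "
    "GROUP BY region ORDER BY total DESC"
).fetchall()
# rows == [("south", 350.0), ("north", 200.0)]
```

The same GROUP BY / aggregate pattern transfers directly to production databases; only the connection object changes.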


Hyderabad (Hybrid)
Agency job via Vmultiply Solutions, posted by Mounica Buddharaju
5 - 8 yrs
₹10L - ₹15L / yr
Windows Azure
SQL
Airflow
Python

Airflow Developer

Experience: 5 to 10 years; relevant experience must be above 4 years.

Work location: Hyderabad (hybrid model)



Job description:  

• Experience working on Airflow.

• Experience in SQL, Python, and object-oriented programming.

• Experience with data warehouse and database concepts and ETL tools (Informatica, DataStage, Pentaho, etc.).

• Azure experience and exposure to Kubernetes.

• Experience with Azure Data Factory, Azure Databricks, and Snowflake.

Required Skills: Azure Databricks/Data Factory, Kubernetes/Docker, DAG development, hands-on Python coding.
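Airflow itself is not sketched here; the snippet below only illustrates, with Python's standard-library graphlib, the task-dependency ordering that an Airflow DAG encodes (the task names are invented):

```python
from graphlib import TopologicalSorter

# Dependencies read "task: set of upstream tasks", i.e. the shape
# extract >> [transform_a, transform_b] >> load in Airflow's notation
dag = {
    "transform_a": {"extract"},
    "transform_b": {"extract"},
    "load": {"transform_a", "transform_b"},
}

# A valid execution order: upstream tasks always come first
order = list(TopologicalSorter(dag).static_order())
```

In a real Airflow DAG the same structure would be declared with operators and the `>>` bit-shift syntax, and the scheduler, not this code, would decide when each task runs.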

[x]cube LABS
Posted by Krishna Kandregula
Hyderabad
2 - 6 yrs
₹8L - ₹20L / yr
ETL
Informatica
Data Warehouse (DWH)
PowerBI
DAX
+12 more
  • Create and manage ETL/ELT pipelines based on requirements.
  • Build PowerBI dashboards and manage the datasets they need.
  • Work with stakeholders to identify data structures needed in future and perform any transformations, including aggregations.
  • Build data cubes for real-time visualisation needs and CXO dashboards.
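The transform-and-aggregate step behind such dashboards can be sketched in pandas; the column names and figures below are invented:

```python
import pandas as pd

# Raw fact table, as it might arrive from an ETL/ELT pipeline
raw = pd.DataFrame({
    "region": ["north", "north", "south", "south"],
    "product": ["a", "b", "a", "b"],
    "sales": [10, 20, 30, 40],
})

# Pivot into a cube-like region x product summary for a dashboard dataset
cube = raw.pivot_table(index="region", columns="product",
                       values="sales", aggfunc="sum")

# A simpler aggregation for a KPI card
total_by_region = raw.groupby("region")["sales"].sum()
```

In the PowerBI world the equivalent aggregations would typically be expressed as DAX measures over the published dataset.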


Required Tech Skills


  • Microsoft PowerBI & DAX
  • Python, Pandas, PyArrow, Jupyter Notebooks, Apache Spark
  • Azure Synapse, Azure DataBricks, Azure HDInsight, Azure Data Factory



A fast-growing Big Data company
Noida, Bengaluru (Bangalore), Chennai, Hyderabad
6 - 8 yrs
₹10L - ₹15L / yr
AWS Glue
SQL
Python
PySpark
Data engineering
+6 more

AWS Glue Developer 

Work Experience: 6 to 8 Years

Work Location:  Noida, Bangalore, Chennai & Hyderabad

Must Have Skills: AWS Glue, DMS, SQL, Python, PySpark, data integration and DataOps

Job Reference ID: BT/F21/IND


Job Description:

Design, build and configure applications to meet business process and application requirements.


Responsibilities:

• 7 years of work experience with ETL, data modelling, and data architecture.
• Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark.
• Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions.
• Orchestration using Airflow.


Technical Experience:

• Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, batch and streaming data pipelines.
• Experience building data pipelines and applications to stream and process large datasets at low latency.


➢ Enhancements, new development, defect resolution and production support of Big data ETL development using AWS native services.

➢ Create data pipeline architecture by designing and implementing data ingestion solutions.

➢ Integrate data sets using AWS services such as Glue, Lambda functions/ Airflow.

➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.

➢ Author ETL processes using Python and PySpark.

➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.

➢ ETL process monitoring using CloudWatch events.

➢ You will be working in collaboration with other teams; good communication is a must.

➢ Must have experience using AWS service APIs, the AWS CLI, and SDKs.


Professional Attributes:

➢ Experience operating very large data warehouses or data lakes. Expert-level skills in writing and optimizing SQL. Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.

➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, Dynamo DB, Athena, Glue in AWS environment.

➢ Expertise in S3, RDS, Redshift, Kinesis, EC2 clusters highly desired.


Qualification:

➢ Degree in Computer Science, Computer Engineering or equivalent.


Salary: Commensurate with experience and demonstrated competence

Hyderabad
4 - 7 yrs
₹14L - ₹25L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more

Roles and Responsibilities

Big Data Engineer + Spark responsibilities:
• At least 3 to 4 years of relevant experience as a Big Data Engineer.
• Minimum 1 year of relevant hands-on experience with the Spark framework.
• Minimum 4 years of application development experience using a programming language such as Scala, Java, or Python.
• Hands-on experience with major components of the Hadoop ecosystem, such as HDFS, MapReduce, Hive, or Impala.
• Strong programming experience building applications/platforms using Scala, Java, or Python.
• Experienced in implementing Spark RDD transformations and actions to implement business analysis.
• An efficient interpersonal communicator with sound analytical problem-solving skills and management capabilities.
• Strives to keep the slope of the learning curve high; able to quickly adapt to new environments and technologies.
• Good knowledge of the agile methodology of software development.
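Spark is not imported here; the snippet below only mirrors, with plain Python built-ins, the map/filter/reduce semantics that Spark RDD transformations and actions expose (the data is invented):

```python
from functools import reduce

data = [1, 2, 3, 4, 5]

# Transformations (lazy in Spark; lazy iterators here too)
mapped = map(lambda x: x * 2, data)        # like rdd.map(...)
filtered = filter(lambda x: x > 4, mapped) # like rdd.filter(...)

# Action: collapse the pipeline to a single value, like rdd.reduce(...)
total = reduce(lambda a, b: a + b, filtered)
# doubled: 2, 4, 6, 8, 10 -> kept: 6, 8, 10 -> total: 24
```

The conceptual difference in Spark is that each step is distributed across a cluster and nothing executes until an action is called.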
A product- and service-based company
Hyderabad, Ahmedabad
4 - 8 yrs
₹15L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Snowflake schema
Python
Spark
+13 more

Job Description

 

Mandatory Requirements 

  • Experience in AWS Glue

  • Experience in Apache Parquet 

  • Proficient in AWS S3 and data lake 

  • Knowledge of Snowflake

  • Understanding of file-based ingestion best practices.

  • Scripting languages – Python & PySpark

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 

  • Data ingestion from different data sources which expose data using different technologies, such as RDBMS, flat files, streams, and time-series data based on various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies.

  • Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of business logic and implement it using the language supported by the base data platform 

  • Develop automated data quality checks to make sure the right data enters the platform and to verify the results of the calculations.

  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.

  • Define process improvement opportunities to optimize data collection, insights and displays.

  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 

  • Identify and interpret trends and patterns from complex data sets 

  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 

  • Key participant in regular Scrum ceremonies with the agile teams  

  • Proficient at developing queries, writing reports and presenting findings 

  • Mentor junior members and bring best industry practices.

 

QUALIFICATIONS

  • 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)

  • Strong background in math, statistics, computer science, data science or related discipline

  • Advanced knowledge of one of these languages: Java, Scala, Python, C#

  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  

  • Proficient with:

  • Data mining/programming tools (e.g. SAS, SQL, R, Python)

  • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)

  • Data visualization (e.g. Tableau, Looker, MicroStrategy)

  • Comfortable learning about and deploying new technologies and tools. 

  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 

  • Good written and oral communication skills and ability to present results to non-technical audiences 

  • Knowledge of business intelligence and analytical tools, technologies and techniques.

Familiarity and experience in the following is a plus: 

  • AWS certification

  • Spark Streaming 

  • Kafka Streaming / Kafka Connect 

  • ELK Stack 

  • Cassandra / MongoDB 

  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools

Monarch Tractors India
Hyderabad
2 - 8 yrs
Best in industry
Machine Learning (ML)
Data Science
Algorithms
Python
C++
+10 more

Designation: Perception Engineer (3D) 

Experience: 0 years to 8 years 

Position Type: Full Time 

Position Location: Hyderabad 

Compensation: As Per Industry standards 

 

About Monarch: 

At Monarch, we’re leading the digital transformation of farming. Monarch Tractor augments both muscle and mind with fully loaded hardware, software, and service machinery that will spur future generations of farming technologies. 

With our farmer-first mentality, we are building a smart tractor that will enhance (not replace) the existing farm ecosystem, alleviate labor availability and cost issues, and provide an avenue for competitive organic and beyond-organic farming by providing mechanical solutions to replace harmful chemical solutions. Despite all the cutting-edge technology we will incorporate, our tractor will still plow, till, and haul better than any other tractor in its class. We have all the necessary ingredients to develop, build, and scale the Monarch Tractor and digitally transform farming around the world.

 

Description: 

We are looking for engineers to work on applied research problems related to perception in the autonomous driving of electric tractors. The team works on classical and deep learning-based techniques for computer vision. Several problems, such as SfM, SLAM, 3D image processing, and multiple-view geometry, are being solved for deployment on resource-constrained hardware.

 

Technical Skills: 

  • Background in Linear Algebra, Probability and Statistics, graphical algorithms and optimization problems is necessary. 
  • Solid theoretical background in 3D computer vision, computational geometry, SLAM and robot perception is desired. Deep learning background is optional. 
  • Knowledge of some numerical algorithms or libraries among: Bayesian filters, SLAM, Eigen, Boost, g2o, PCL, Open3D, ICP. 
  • Experience in two-view and multi-view geometry. 
  • Necessary Skills: Python, C++, Boost, Computer Vision, Robotics, OpenCV. 
  • Academic experience for freshers in Vision for Robotics is preferred.  
  • Experienced candidates in Robotics with no prior Deep Learning experience willing to apply their knowledge to vision problems are also encouraged to apply. 
  • Software development experience on low-power embedded platforms is a plus. 
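As a small illustration of the projective geometry underlying the multi-view work above, here is a pinhole-camera projection in NumPy (the intrinsics and the 3D point are invented for the sketch):

```python
import numpy as np

# Invented intrinsics: focal length 500 px, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A 3D point in the camera frame (metres), 2 m in front of the camera
X = np.array([0.5, -0.25, 2.0])

# Pinhole projection: x ~ K X, then dehomogenize by the depth
x_h = K @ X
u, v = x_h[:2] / x_h[2]
# u = 500 * 0.5 / 2 + 320 = 445.0, v = 500 * (-0.25) / 2 + 240 = 177.5
```

Two-view geometry builds on exactly this model: the same 3D point projected through two camera poses yields the correspondence constraints (epipolar geometry) that SfM and SLAM exploit.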

 

Responsibilities: 

  • A strong grasp of engineering principles and a clear understanding of data structures and algorithms. 
  • Ability to understand, optimize and debug imaging algorithms. 
  • Ability to drive a project from conception to completion, research papers to code with disciplined approach to software development on Linux platform. 
  • Demonstrate outstanding ability to perform innovative and significant research in the form of technical papers, thesis, or patents. 
  • Optimize runtime performance of designed models. 
  • Deploy models to production and monitor performance and debug inaccuracies and exceptions. 
  • Communicate and collaborate with team members in India and abroad for the fulfillment of your duties and organizational objectives. 
  • Thrive in a fast-paced environment and can own the project end to end with minimum hand holding. 
  • Learn & adapt to new technologies & skillsets. 
  • Work on projects independently with timely delivery & defect free approach. 
  • Thesis focusing on the above skill set may be given more preference. 

 

What you will get: 

At Monarch Tractor, you’ll play a key role on a capable, dedicated, high-performing team of rock stars. Our compensation package includes a competitive salary and excellent health benefits, commensurate with the role you’ll play in our success.

 

El Corte Inglés
Saradhi Reddy
Posted by Saradhi Reddy
Hyderabad
3 - 7 yrs
₹10L - ₹25L / yr
Data Science
R Programming
Python
Thinkdeeply
Posted by Aditya Kanchiraju
Hyderabad
5 - 15 yrs
₹5L - ₹35L / yr
Machine Learning (ML)
R Programming
TensorFlow
Deep Learning
Python
+2 more

Job Description

Want to make every line of code count? Tired of being a small cog in a big machine? Like a fast-paced environment where stuff gets DONE? Wanna grow with a fast-growing company (both career and compensation)? Like to wear different hats? Join ThinkDeeply in our mission to create and apply Enterprise-Grade AI for all types of applications.

 

Seeking an ML Engineer with high aptitude for development. We will also consider coders with high aptitude in ML. Years of experience are important, but we are also looking for interest and aptitude. As part of the early engineering team, you will have a chance to make a measurable impact on the future of Thinkdeeply, as well as take on a significant amount of responsibility.

 

Experience

10+ Years

 

Location

Bozeman/Hyderabad

 

Skills

Required Skills:

Bachelor’s/Master’s or PhD in Computer Science, or related industry experience

3+ years of Industry Experience in Deep Learning Frameworks in PyTorch or TensorFlow

7+ years of industry experience in scripting languages such as Python and R.

7+ years in software development doing at least some level of Researching / POCs, Prototyping, Productizing, Process improvement, Large-data processing / performance computing

Familiar with non-neural-network methods such as Bayesian methods, SVMs, AdaBoost, random forests, etc.

Some experience in setting up large scale training data pipelines.

Some experience in using Cloud services such as AWS, GCP, Azure

Desired Skills:

Experience in building deep learning models for Computer Vision and Natural Language Processing domains

Experience in productionizing/serving machine learning in industry setting

Understand the principles of developing cloud native applications

 

Responsibilities

 

Collect, Organize and Process data pipelines for developing ML models

Research and develop novel prototypes for customers

Train, implement and evaluate shippable machine learning models

Deploy and iterate improvements of ML Models through feedback

Woodcutter Film Technologies Pvt. Ltd.
Posted by Athul Krishnan
Hyderabad
1 - 5 yrs
₹3L - ₹6L / yr
Data Science
R Programming
Python
We're an early-stage film-tech startup with a mission to empower filmmakers and independent content creators with data-driven decision-making tools. We're looking for a data person to join the core team. Please get in touch if you would be excited to join us on this super exciting journey of disrupting the film production and distribution business. We are currently collaborating with Rana Daggubati's Suresh Productions and work out of their studio in Hyderabad, so exposure and opportunities to work on real issues faced by the media industry will be plentiful.