About Us
upGrad is an online education platform building the careers of tomorrow by offering industry-relevant programs in an immersive learning experience. Our mission is to create a digital-first learning experience that delivers tangible career impact to individuals at scale. upGrad currently offers programs in Data Science, Machine Learning, Product Management, Digital Marketing, and Entrepreneurship, among others. upGrad is looking for people passionate about management and education to help design learning programs that keep working professionals sharp and relevant and help build the careers of tomorrow.
About upGrad
upGrad is an online higher education platform. Founded in March 2015 by Ronnie Screwvala, Mayank Kumar, Ravijot Chugh, and Phalgun Kompalli, upGrad provides rigorous, industry-relevant programs designed and delivered in collaboration with world-class faculty and industry. Merging the latest technology, pedagogy, and services, upGrad is creating an immersive learning experience – anytime and anywhere.
Through exclusive partnerships with some of the most prominent universities, such as IIIT-Bangalore, MICA, BITS Pilani, ISB, and Cambridge Judge Business School, we aim to impart university education online.
Learning online can be tough, especially when you have to do it all by yourself. Here is why you should upskill with upGrad:
- We provide an engaging experience via our suite of learning applications, from university application right through to your job and career transition
- We provide structured online courses in collaboration with some of the prominent universities and industry experts
- We co-create a rigorous curriculum in collaboration with these universities to provide the learners with a holistic learning experience
- All our courses are comprehensive, structured and rigorous - delivered online, providing you the flexibility and opportunity of continuous learning
- We conduct regular live lectures with industry experts and professors
- Each of our learners is assigned a dedicated student mentor who helps them chart a career path and motivates them to push themselves
- We provide in-depth feedback on all the assignments, case studies, and projects
- We have delivered 400+ successful career transitions and we’re committed to building careers of tomorrow
- You get access to an alumni network of 3,000+ students across the globe
- We also conduct periodic offline events like Hackathons, Bootcamps, Alumni Nights and connect you not only to the professors and industry experts but the peers in your batch too
- Last but not least, we provide career assistance and help all learners with interview preparation, mentorship calls, and job placements even after the completion of the program
Team: We are a team of 9 data scientists working on Video Analytics and Data Analytics projects for the internal AI requirements of Reliance Industries as well as for external business. At any given time, we are making progress on multiple projects (at least 4) in Video Analytics or Data Analytics.
Responsibilities:
- Write and maintain production-level code in Python for deploying machine learning models
- Create and maintain deployment pipelines through CI/CD tools (preferably GitLab CI)
- Implement alerts and monitoring for prediction accuracy and data drift detection
- Implement automated pipelines for training and replacing models
- Work closely with the data science team to deploy new models to production
Required Qualifications:
- Degree in Computer Science, Data Science, IT, or a related discipline
- 2+ years of experience in software engineering or data engineering
- Programming experience in Python
- Experience in data profiling, ETL development, testing, and implementation
- Experience in deploying machine learning models
Good to have:
- Experience with AWS resources for ML and data engineering (SageMaker, Glue, Athena, Redshift, S3)
- Experience in deploying TensorFlow models
- Experience in deploying and managing MLflow
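The drift-monitoring responsibility above can be sketched in a few lines of Python. This is a toy illustration, not the team's actual stack; the z-score approach, threshold, and sample data are all invented for the example:

```python
from statistics import mean, stdev

def detect_mean_drift(baseline, live, z_threshold=3.0):
    """Flag drift when the live feature mean deviates from the baseline
    mean by more than z_threshold baseline standard deviations."""
    base_mean = mean(baseline)
    base_std = stdev(baseline)
    z = abs(mean(live) - base_mean) / base_std
    return z > z_threshold

# Live data drawn from a similar range: no drift
print(detect_mean_drift([10, 11, 9, 10, 12], [10, 11, 10]))   # False
# Live values shifted far from the baseline: drift
print(detect_mean_drift([10, 11, 9, 10, 12], [50, 52, 49]))   # True
```

A production version would run this per feature on a schedule and raise an alert instead of returning a boolean.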
Kindly note that only candidates who graduated in 2022 or 2023, are based in Mumbai, and can join immediately will be considered for this role.
JD - Data Operations Analyst
What is the job and team like?
- As a Data Operations Analyst, you manage the business reporting of numerous teams and constantly monitor performance
- Check the integrity of the revenue reporting done by the different systems so that correct profitability is reported to the CXOs
- Send reports periodically and alert stakeholders to changes in key performance metrics
- Allocate effort across different business implementations that help build the Profit/Loss statement for the financials
- Track crucial data points which affect the core of the business and escalate it to senior stakeholders
Roles and Responsibilities
- Graduate in an IT background (BE/BSc IT/BCA); 2022 and 2023 graduates only
- Executing a set of business processes daily/weekly/monthly as per Business requirement.
- Provide ad-hoc data support on any urgent reports and material in an expedited manner
- Maintain a list of open tasks and escalations, and send updates to the relevant stakeholders
- Have an eye for detail, should have the ability to look at numbers, spot trends and identify gaps
- Identify efficient and meaningful ways to communicate data and analysis through ongoing reports and dashboards
- Proficiency in SQL and Excel; knowledge of statistical and analytical tools such as SAS or SPSS is a big plus
- Managing master data, including creation, updates, and deletion.
- Ability to work in a fast paced, technical, cross functional environment
- Familiarity with Internet Industry and Online Advertising Business is a plus
Ideal candidate
- Import and export large volumes of data to and from database tables as required
- Should be able to write Data Definition Language (DDL) and Data Manipulation Language (DML) SQL commands
- Develop programs, methodologies to get analyzable data on a regular basis
- Good team player and multi-tasker
- Should have the ability to learn and adapt to change
- Self-starter: must be productive with minimal direction
- High-level written and verbal communication skills
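As a minimal illustration of the DDL/DML requirement above, here is a sketch using Python's built-in sqlite3 module; the table and data are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define the schema
cur.execute("CREATE TABLE revenue (team TEXT, amount REAL)")

# DML: insert rows, then query an aggregate
cur.executemany("INSERT INTO revenue VALUES (?, ?)",
                [("ads", 120.0), ("ads", 80.0), ("search", 50.0)])
cur.execute("SELECT team, SUM(amount) FROM revenue GROUP BY team ORDER BY team")
rows = cur.fetchall()
print(rows)  # [('ads', 200.0), ('search', 50.0)]
conn.close()
```

The same CREATE/INSERT/SELECT statements apply unchanged on any relational database the role actually uses.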
Job Details
Work mode - In office
Must-have skills - SQL, MS Excel, communication
- Minimum 2.5 years of experience as a Python Developer
- Minimum 2.5 years of experience in a framework such as Django/Flask/FastAPI
- Minimum 2.5 years of experience in SQL/PostgreSQL
- Minimum 2.5 years of experience with Git/GitLab/Bitbucket
- Minimum 2 years of experience in deployment (CI/CD with Jenkins)
- Minimum 2.5 years of experience in a cloud platform such as AWS/GCP/Azure
- Design and build web crawlers to scrape data and URLs
- Integrate the data crawled and scraped into our databases
- Create more/better ways to crawl relevant information
- Strong knowledge of web technologies (HTML, CSS, Javascript, XPath, Regex)
- Understanding of data privacy policies (esp. GDPR) and personally identifiable information
- Develop automated and reusable routines for extracting information from various data sources
- Prepare requirement summaries and re-confirm them with the Operations team
- Translate business requirements into specific solutions
- Ability to relay technical information to non-technical users
- Demonstrate effective problem-solving and analytical skills
- Attention to detail, proactiveness, critical thinking, and accuracy are essential
- Ability to work to deadlines and give realistic estimates
Skills & Expertise
- 2+ years of web scraping experience
- Experience with two or more of the following web scraping frameworks and tools: Selenium, Scrapy, Import.io, Webhose.io, ScrapingHub, ParseHub, Phantombuster, Octoparse, Puppeteer, etc.
- Basic knowledge of data engineering (database ingestion, ETL, etc.)
- Solution orientation and "can do" attitude - with a desire to tackle complex problems.
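The scraping skills listed above can be sketched without any of those frameworks, using only Python's standard library. The HTML sample and link targets below are invented for the example:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href attribute of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

html = ('<ul><li><a href="/jobs/1">Data Engineer</a></li>'
        '<li><a href="/jobs/2">Python Developer</a></li></ul>')
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['/jobs/1', '/jobs/2']
```

Frameworks like Scrapy or Selenium add crawling, scheduling, and JavaScript rendering on top of this same extraction idea.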
• 2+ years of experience in data engineering & strong understanding of data engineering principles using big data technologies
• Excellent programming skills in Python are mandatory
• Expertise in relational databases (MSSQL/MySQL/Postgres) and in SQL; exposure to NoSQL databases such as Cassandra or MongoDB will be a plus
• Exposure to deploying ETL pipelines with tools such as Airflow, Docker containers & Lambda functions
• Experience with AWS cloud services such as AWS CLI, Glue, Kinesis, etc.
• Experience using Tableau for data visualization is a plus
• Ability to demonstrate a portfolio of projects (GitHub, papers, etc.) is a plus
• A motivated, can-do attitude and a desire to make a change are a must
• Excellent communication skills
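As a rough, self-contained illustration of the ETL-pipeline exposure listed above, here is a toy extract-transform-load flow in plain Python; in practice a tool like Airflow would schedule and retry steps like these, and the records and table are invented:

```python
import json
import sqlite3

def extract():
    # Stand-in for pulling raw records from an API or an S3 object
    return json.loads('[{"user": "a", "clicks": 3}, {"user": "b", "clicks": 7}]')

def transform(records):
    # Keep only records with activity and flatten to (user, clicks) tuples
    return [(r["user"], r["clicks"]) for r in records if r["clicks"] > 0]

def load(rows, conn):
    # Write the cleaned rows into the target table
    conn.execute("CREATE TABLE IF NOT EXISTS clicks (user TEXT, clicks INTEGER)")
    conn.executemany("INSERT INTO clicks VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(clicks) FROM clicks").fetchone()[0]
print(total)  # 10
```

An orchestrator's job is mostly to run extract, transform, and load as separate, retryable tasks with dependencies between them.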
- Experience implementing large-scale ETL processes using Informatica PowerCenter.
- Design high-level ETL process and data flow from the source system to target databases.
- Strong experience with Oracle databases and strong SQL.
- Develop & unit test Informatica ETL processes for optimal performance utilizing best practices.
- Performance tune Informatica ETL mappings and report queries.
- Develop database objects like Stored Procedures, Functions, Packages, and Triggers using SQL and PL/SQL.
- Hands-on Experience in Unix.
- Experience in Informatica Cloud (IICS).
- Work with appropriate leads and review high-level ETL design, source to target data mapping document, and be the point of contact for any ETL-related questions.
- Good understanding of project life cycle, especially tasks within the ETL phase.
- Ability to work independently and multi-task to meet critical deadlines in a rapidly changing environment.
- Excellent communication and presentation skills.
- Experience working effectively in an onsite/offshore delivery model.
Location: Chennai- Guindy Industrial Estate
Duration: Full time role
Company: Mobile Programming (https://www.mobileprogramming.com/)
Client Name: Samsung
We are looking for a Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure that optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
Responsibilities for Data Engineer
Create and maintain optimal data pipeline architecture.
Assemble large, complex data sets that meet functional/non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications for Data Engineer
Experience building and optimizing big data ETL pipelines, architectures, and data sets.
Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured datasets.
Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
A successful history of manipulating, processing, and extracting value from large disconnected datasets.
Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
Strong project management and organizational skills.
Experience supporting and working with cross-functional teams in a dynamic environment.
We are looking for a candidate with 3-6 years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
Experience with big data tools: Spark, Kafka, HBase, Hive etc.
Experience with relational SQL and NoSQL databases
Experience with AWS cloud services: EC2, EMR, RDS, Redshift
Experience with stream-processing systems: Storm, Spark-Streaming, etc.
Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc.
Skills: Big Data, AWS, Hive, Spark, Python, SQL
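The stream-processing requirement above can be illustrated with a toy running aggregation in plain Python; systems like Storm or Spark Streaming apply the same idea at scale, and the event stream here is invented:

```python
from collections import defaultdict

def events():
    # Stand-in for a message queue or Kafka topic yielding events one at a time
    yield from [("page_view", 1), ("click", 1), ("page_view", 1)]

counts = defaultdict(int)
for event_type, n in events():
    # Incremental update per event: no batch over the full dataset is needed
    counts[event_type] += n

print(dict(counts))  # {'page_view': 2, 'click': 1}
```

The key property is that state is updated one event at a time, which is what lets stream processors handle unbounded input with bounded memory per key.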