We are looking for an outstanding Big Data Engineer with experience setting up and maintaining data warehouses and data lakes for an organization. This role will collaborate closely with the Data Science team and help them build and deploy machine learning and deep learning models on big data analytics platforms.
Roles and Responsibilities:
- Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
- Develop programs in Scala and Python as part of data cleaning and processing.
- Assemble large, complex data sets that meet functional/non-functional business requirements, fostering data-driven decision making across the organization.
- Design and develop distributed, high-volume, high-velocity, multi-threaded event-processing systems.
- Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Deliver operational excellence, guaranteeing high availability and platform stability.
- Collaborate closely with the Data Science team and help them build and deploy machine learning and deep learning models on big data analytics platforms.
Skills:
- Experience with big data pipelines, big data analytics, and data warehousing.
- Experience with SQL/NoSQL, schema design, and dimensional data modeling.
- Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with the big data technology stack, such as HBase, Hadoop, Hive, and MapReduce.
- Experience in designing systems that process structured as well as unstructured data at large scale.
- Experience in AWS/Spark/Java/Scala/Python development.
- Strong skills in PySpark (Python and Spark): the ability to create, manage, and manipulate Spark DataFrames, and expertise in Spark query tuning and performance optimization.
- Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
- Prior exposure to streaming data sources such as Kafka.
- Knowledge of shell scripting and Python scripting.
- High proficiency in database skills (e.g., complex SQL) for data preparation, cleaning, and wrangling/munging, with the ability to write advanced queries and create stored procedures.
- Experience with NoSQL databases such as Cassandra / MongoDB.
- Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
- Experience building and deploying applications on on-premises and cloud-based infrastructure.
- A good understanding of the machine learning landscape and concepts.
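The data-validation responsibility described above can be sketched in plain Python. This is a minimal, framework-agnostic illustration with hypothetical field names and rules; in practice such checks would typically run inside a Spark or scheduled pipeline job.

```python
from datetime import datetime

# Hypothetical required fields for an incoming order record.
REQUIRED_FIELDS = {"order_id", "amount", "created_at"}

def validate_record(record: dict) -> list:
    """Return a list of data-quality issues found in a single record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    amount = record.get("amount")
    if amount is not None and (not isinstance(amount, (int, float)) or amount < 0):
        issues.append(f"invalid amount: {amount!r}")
    created_at = record.get("created_at")
    if created_at is not None:
        try:
            datetime.fromisoformat(created_at)
        except (TypeError, ValueError):
            issues.append(f"unparseable timestamp: {created_at!r}")
    return issues

records = [
    {"order_id": 1, "amount": 25.0, "created_at": "2021-06-01T10:00:00"},
    {"order_id": 2, "amount": -5, "created_at": "not-a-date"},
]
report = {r["order_id"]: validate_record(r) for r in records}
```

In a real pipeline, a report like this would feed a monitoring dashboard or alerting system rather than a local dictionary.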
Qualifications and Experience:
Engineering graduates or postgraduates, preferably in Computer Science from premier institutions, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.
Certifications:
It is good to have at least one of the certifications listed here:
AZ-900 - Azure Fundamentals
DP-200, DP-201, DP-203, AZ-204 - Data Engineering
AZ-400 - DevOps Certification
Job Description:
We are looking for an exceptional Data Scientist Lead/Manager who is passionate about data and motivated to build large-scale machine learning solutions that elevate our data products. This person will contribute to data analytics for insight discovery and to the development of machine learning pipelines that support modeling of terabytes of daily data for various use cases.
Location: Pune (initially remote due to COVID-19)
Note: Looking for someone who can start immediately or within a month. Hands-on experience in Python programming (minimum 5 years) is a must.
About the Organisation :
- The company provides a dynamic, fun workplace filled with passionate individuals. We are at the cutting edge of advertising technology, and there is never a dull moment at work.
- We have a truly global footprint, with our headquarters in Singapore and offices in Australia, the United States, Germany, the United Kingdom, and India.
- You will gain work experience in a global environment. Our team speaks over 20 different languages, comes from more than 16 different nationalities, and over 42% of our staff are multilingual.
Qualifications:
• 8+ years relevant working experience
• Master's or Bachelor's degree in Computer Science or Engineering
• Working knowledge of Python and SQL
• Experience in time series data, data manipulation, analytics, and visualization
• Experience working with large-scale data
• Proficiency in various ML algorithms for supervised and unsupervised learning
• Experience working in Agile/Lean model
• Experience with Java and Golang is a plus
• Experience with BI toolkit such as Tableau, Superset, Quicksight, etc is a plus
• Exposure to building large-scale ML models using one or more modern tools and libraries, such as AWS SageMaker, Spark MLlib, Dask, TensorFlow, PyTorch, Keras, or the GCP ML stack
• Exposure to modern big data technologies such as Cassandra/Scylla, Kafka, Ceph, Hadoop, and Spark
• Exposure to IaaS platforms such as AWS, GCP, and Azure
Typical persona: Data Science Manager/Architect
Experience: 8+ years of programming/engineering experience (with at least the last 4 years in data science at a product development company)
Type: Hands-on candidate only
Must:
a. Hands-on Python: pandas, scikit-learn
b. Working knowledge of Kafka
c. Able to carry out own tasks and help the team resolve problems, logical or technical (25% of the job)
d. Strong analytical and debugging skills
e. Strong communication skills
Desired (in order of priority):
a. Go (strong advantage)
b. Airflow (Strong advantage)
c. Familiarity and working experience with more than one type of database: relational, object, columnar, graph, and other non-relational databases
d. Data structures, Algorithms
e. Experience with multi-threading and thread-synchronization concepts
f. AWS Sagemaker
g. Keras
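The hands-on pandas skills listed under "Must" can be illustrated with a minimal sketch. The column names and data here are made up purely for illustration.

```python
import pandas as pd

# Hypothetical daily event counts per user.
df = pd.DataFrame({
    "user": ["a", "a", "b", "b", "b"],
    "events": [3, 5, 2, 4, 6],
})

# Aggregate per user: total and mean events.
summary = df.groupby("user")["events"].agg(["sum", "mean"])
```

Day-to-day work in this role would apply the same groupby/aggregate patterns to much larger, streamed data sets.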
Job Description:
- 3-4 years of hands-on Python programming, including libraries from the PyData stack such as pandas
- Exposure to MongoDB
- Experience in writing Unit Test cases
- Expertise in writing intermediate to advanced SQL database queries
- Strong Verbal/Written communication skills
- Ability to work with onsite counterpart teams
We are helping hire for an early-stage Stealth Mode funded gaming startup that is looking for a dynamic User Experience Designer to join their talented team!
The frontend engineer will relish the subtle interaction details that make products delightful, write highly reusable code, and think about how to build better systems. Product intuition is as much a necessity as technical knowledge.
# WHAT YOU'LL DO:
• Build user interfaces that are beautiful, consistent, and fast.
• Invent patterns and reusable components that our team can assemble to build powerful software workflows.
• Take sole ownership of your product(s) - keep a keen eye out for bugs that might arise, passionately resolve them; make feature additions to your product; or sometimes, when you're feeling ambitious - rewrite the whole product from scratch! (Don't make this a habit though.)
• Ensure the technical feasibility of UI/UX designs
• Optimise applications for maximum speed and scalability
# SKILLS YOU SHOULD HAVE:
• 3+ years of experience in building and shipping products that people use every day
• Prior startup experience is mandatory. (Brownie points for someone who has worked in the gaming industry)
• Be able to write clean, maintainable code which others can work on
• Working knowledge of frontend frameworks and tools; Node.js expertise is a bonus
• Have knowledge of browser behavior, performance, compatibility and cross-browser issues
• Proficient understanding of code versioning tools, such as Git
• You take pride in working on advanced CSS, animations and responsiveness
• Ability to own end to end responsibility - right from requirement to release
• Ability to produce bug-free and production grade code
• Skills we consider: CSS, JavaScript, React, Redux, Node.js, LESS, Sass, Bootstrap, and Express.
• Candidates from top-tier colleges will be preferred.
LOCATION - Gurugram (Udyog Vihar)
The company has a wide distribution network locally and globally, with its current 650 owned centres covering more than 40,000 locations. They have established themselves as a priority express service provider in the e-commerce B2C deliveries market through customised services and a strong footprint.
They are listed on the NSE and BSE, and have 2,900 highly skilled employees working across 650 offices all over India.
What you will do:
- Timely preparation and implementation of the training calendar on a monthly and annual basis
- Identifying the training needs of employees and nominating them for suitable programs
- Making timely nominations for all training programs (internal and external), and handling the related arrangements, communication, and coordination
- Organizing induction training programs
- Conducting training programs for front-line employees by aligning the organization's learning strategy with business objectives
- Preparing monthly and annual training MIS and maintaining records of all trainings in the HRMS/ERP
What you must have:
- Strong communication and presentation skills
- Ability to research and develop training material
- Excellent time management, problem solving, organization and leadership skills
- Good command of MS Office tools
- Knowledge of various tools and aids used in training
Introduction
We are looking to empanel a Paid Ads Specialist to lead and execute performance marketing strategies and campaign plans for our clients. We work with exciting brands and organizations that we have fallen hopelessly in love with. To these partnerships, we bring truckloads of passion and oodles of weirdness. To top it all off, we ensure everything we do is meaningful.
We are looking for teammates who are:
- As 'weird' as possible - someone who is itching to change the status quo, to take risks, to build a story that everyone talks about, and to make everyone sit up and take notice
- Always keeping an eye out for never compromising on 'meaningfulness' - every campaign, every strategy, every creative idea should never be superficial beyond a degree and should work towards genuinely and meaningfully connecting with audiences
“Weird + Meaningful” is our core belief, and we are craving to find people who really get it. You can learn more about this in this TED talk by our founder: https://www.youtube.com/watch?v=OcIGKxsDVpY
Responsibilities
- Developing and managing digital prospecting and remarketing campaigns
- Managing budgets and campaigns across all digital channels to drive strong return on investment and efficient CACs
- Ensuring successful planning, execution, optimization for key traffic KPIs via paid, organic & owned media channels
- Should be comfortable handling all types of campaigns on Google, Instagram, Facebook, YouTube, LinkedIn, and Amazon
- Should be able to audit, analyze, and advise on the creation and optimization of landing pages for conversion
- Should be able to set up pixels and guide/support the client in CRM integration for lead tracking
- Identifying, testing, and developing capabilities of executing campaigns on new channels to continue to meet or exceed established critical metrics
- Conduct A/B and other tests and work closely with the lead, the designers, and copywriters to ensure impactful optimization of campaigns
- Working closely with the management to share funnel conversion improvement ideas, feedback & present results
- Suggest new techniques, keep track of industry developments, and discover new tools that would drive better results
Knowledge & Skills
- Proven relevant work experience of at least 3 years
- You have a bachelor's degree in Marketing, Business Administration, or related fields (preferred)
- It is a plus if you also have experience in executing effective multi-channel marketing campaigns, including affiliate marketing, PPC, SEO, social media, and other digital channels
- You have solid expertise in campaign and channel analysis and reporting, including Google Analytics experience
- You have experience in setting up pixels, optimizing landing pages, and guiding/supporting the client in CRM integration
- You possess excellent analytical skills and leverage data, metrics, analytics, and consumer behaviour trends to drive actionable insights & recommendations
- You are a highly goal-oriented individual and have excellent communication skills
- You are open-minded, curious, and a strong problem solver
- Developing telemetry software to connect Junos devices to the cloud
- Fast prototyping and laying the software foundation for product solutions
- Moving prototype solutions to a production, multi-tenant cloud SaaS solution
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
- Build analytics tools that utilize the data pipeline to provide significant insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with partners including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics specialists to strive for greater functionality in our data systems.
Qualification and Desired Experiences
- Master's degree in Computer Science, Electrical Engineering, Statistics, Applied Math, or equivalent fields, with a strong mathematical background
- 5+ years of experience building data pipelines for data-science-driven solutions
- Strong hands-on coding skills (preferably in Python) for processing large-scale data sets and developing machine learning models
- Familiarity with one or more machine learning or statistical modeling tools such as NumPy, scikit-learn, MLlib, and TensorFlow
- A good team player with excellent interpersonal skills: written, verbal, and presentation
- Create and maintain optimal data pipeline architecture.
- Assemble large, sophisticated data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Experience with AWS, S3, Flink, Spark, Kafka, Elastic Search
- Previous work in a start-up environment
- 3+ years of experience building data pipelines for data-science-driven solutions
- Master's degree in Computer Science, Electrical Engineering, Statistics, Applied Math, or equivalent fields, with a strong mathematical background
- We are looking for a candidate with 9+ years of experience in a Data Engineer role who has a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience with the following software/tools:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift
- Experience with stream-processing systems: Storm, Spark-Streaming, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
- Strong hands-on coding skills (preferably in Python) for processing large-scale data sets and developing machine learning models
- Familiarity with one or more machine learning or statistical modeling tools such as NumPy, scikit-learn, MLlib, and TensorFlow
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and find opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Proven understanding of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Strong project management and interpersonal skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
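The advanced SQL query-authoring skills listed above can be sketched with Python's built-in sqlite3 module. The schema and data below are hypothetical, chosen only to show aggregation, filtering on aggregates, and ordering.

```python
import sqlite3

# In-memory database with a toy orders table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL);
INSERT INTO orders (customer, amount) VALUES
  ('alice', 120.0), ('alice', 80.0), ('bob', 50.0);
""")

# Revenue per customer, keeping only customers above a threshold.
rows = conn.execute("""
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    HAVING total > 100
    ORDER BY total DESC
""").fetchall()
```

In production, the same query patterns would run against relational stores such as Postgres or Redshift rather than SQLite.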
Write and post technical job descriptions
Source potential candidates on different platforms, like Naukri, Cutshort, and LinkedIn
Parse specialized skills and qualifications to screen IT resumes
Perform pre-screening calls to analyze applicants’ abilities
Interview candidates combining various methods (e.g. structured interviews, technical assessments, and behavioral questions)
Coordinate with IT team leaders to forecast department goals and hiring needs
Craft and send personalized recruiting emails with current job openings to passive candidates
Participate in tech conferences and meetups to network with IT professionals
Compose job offer letters
Onboard new hires
Promote the company’s reputation as a great place to work
Conduct job and task analyses to document job duties and requirements