Data Engineer

at Amagi Media Labs

Posted by Rajesh C
Bengaluru (Bangalore), Noida
5 - 9 yrs
₹10L - ₹17L / yr
Full time
Skills
Data engineering
Spark
Scala
Hadoop
Apache Hadoop
Apache Spark
  • We are looking for: Data Engineer
  • Spark
  • Scala
  • Hadoop
Experience: 5 to 9 years
Notice period: 15 to 30 days
Location: Bangalore / Noida

About Amagi Media Labs

Founded: 2008
Stage: Profitable
About
Amagi enables TV networks, OTT platforms, and content owners to transition to cloud technologies for their playout, delivery and monetization needs.
Connect with the team
Jiby Thomas
Rajesh C
Vaibhav Rajput

Similar jobs

American Multinational Retail Corp
Agency job
via Hunt & Badge Consulting Pvt Ltd by Chandramohan Subramanian
Chennai
2 - 5 yrs
₹5L - ₹15L / yr
Scala
Spark
Apache Spark

Should have a passion for learning and adapting to new technologies, be able to understand and troubleshoot issues and risks, make informed decisions, and lead projects.

 

Your Qualifications

 

  • 2-5 years' experience with functional programming
  • Experience with functional programming using Scala and the Spark framework (a minimal sketch follows this list)
  • Strong understanding of object-oriented programming, data structures, and algorithms
  • Good experience with at least one cloud platform (Azure, AWS, or GCP)
  • Experience with distributed (multi-tiered) systems, relational databases, and NoSQL storage solutions
  • Desire to learn new technologies and languages
  • Participation in software design, development, and code reviews
  • High level of proficiency in Computer Science/Software Engineering fundamentals, and contribution to the technical skills growth of other team members
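The role itself is Scala-focused, but purely as an illustration of the functional, Spark-style transformation work involved, here is a minimal PySpark sketch; the input file and column names (user_id, event_type, ts) are hypothetical:

    # Minimal Spark sketch: functional-style transformations over an event log.
    # Hypothetical input: events.json with user_id, event_type, and ts columns.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("event-rollup").getOrCreate()

    events = spark.read.json("events.json")

    # Pure, declarative transformations: each step returns a new DataFrame.
    daily_counts = (
        events
        .withColumn("day", F.to_date("ts"))
        .groupBy("day", "event_type")
        .agg(F.countDistinct("user_id").alias("unique_users"))
        .orderBy("day")
    )

    daily_counts.show()
    spark.stop()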


Your Responsibility

 

  • Design, build, and configure applications to meet business process and application requirements
  • Proactively identify and communicate potential issues and concerns, and recommend/implement alternative solutions as appropriate
  • Troubleshoot and optimize existing solutions

 

Provide advice on technical design to ensure solutions are forward-looking and flexible enough to accommodate potential future requirements and business needs.
at bipp
Posted by Vish Josh
Remote only
2 - 8 yrs
₹5L - ₹14L / yr
SQL
Apache Spark
Python
PySpark
Hadoop
+2 more
Do NOT apply if you:
  • Are not serious about joining us; the interview process shouldn't be a waste of anyone's time.
  • Want to be a Power BI, Qlik, or Tableau-only developer.
  • Are a machine learning aspirant.
  • Are a data scientist.
  • Want to write only Python scripts.
  • Want to do AI.
  • Want to do 'BIG' data.
  • Want to do Hadoop.
  • Are a fresh graduate.
Apply if you:
  • Write SQL and Python for complicated analytical queries.
  • Understand a client's existing business problems and map their needs to the schema they have.
  • Can neatly disassemble a problem into components and solve it using SQL.
  • Have worked on existing BI products.
Mention 'awesome sql' when applying or in the screening question so that we know you have read the job post.

Your role:
  • Develop solutions for our clients with our exciting new BI product.
  • You should be very experienced and comfortable writing SQL against complicated schemas to help answer business questions (a small illustration follows this list).
  • Have an analytical thought process.
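Purely as a flavour of the analytical SQL the role involves, here is a minimal, self-contained sketch using Python's built-in sqlite3 module; the schema and data are invented for illustration:

    # Hypothetical example: revenue per region, plus each region's share of the
    # total, computed with a window function (needs SQLite 3.25+).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
        INSERT INTO orders VALUES
            (1, 'north', 120.0), (2, 'south', 80.0),
            (3, 'north', 200.0), (4, 'east', 50.0);
    """)

    query = """
        WITH region_rev AS (
            SELECT region, SUM(amount) AS revenue
            FROM orders
            GROUP BY region
        )
        SELECT region,
               revenue,
               ROUND(100.0 * revenue / SUM(revenue) OVER (), 1) AS pct_of_total
        FROM region_rev
        ORDER BY revenue DESC;
    """
    for region, revenue, pct in conn.execute(query):
        print(region, revenue, pct)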
Bengaluru (Bangalore)
6 - 12 yrs
Best in industry
MongoDB
Spark
Hadoop
Big Data
Data engineering
+5 more
What You'll Do:
● Our Infrastructure team is looking for an excellent Big Data Engineer to join a core group that designs the industry's leading Micro-Engagement Platform. This role involves the design and implementation of big data architectures and frameworks for the industry's leading intelligent workflow automation platform. As a specialist on the Ushur Engineering team, your responsibilities will be to:
● Use your in-depth understanding to architect and optimize databases and data ingestion pipelines
● Develop HA strategies, including replica sets and sharding, for highly available clusters (a minimal sharding sketch follows this list)
● Recommend and implement solutions to improve performance, resource consumption, and resiliency
● On an ongoing basis, identify bottlenecks in databases in development and production environments and propose solutions
● Help the DevOps team with your deep knowledge of database performance, scaling, tuning, migration, and version upgrades
● Provide verifiable technical solutions to support operations at scale and with high availability
● Recommend appropriate data processing toolsets and big data ecosystems to adopt
● Design and scale databases and pipelines across multiple physical locations in the cloud
● Conduct root-cause analysis of data issues
● Be self-driven; constantly research and suggest the latest technologies
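Since the stack centres on MongoDB, here is a minimal sketch of one HA/scale-out step mentioned above: enabling hashed sharding on a collection via pymongo. It assumes a sharded cluster is already running and reachable through a mongos router; the host, database, and collection names are hypothetical:

    # Minimal sketch: enable hashed sharding on a collection via pymongo.
    # Assumes an existing sharded cluster; "appdb" and "events" are hypothetical.
    from pymongo import MongoClient

    client = MongoClient("mongodb://mongos-host:27017")

    # Enable sharding for the database, then shard the collection on a hashed
    # key so writes spread evenly across shards.
    client.admin.command("enableSharding", "appdb")
    client.admin.command(
        "shardCollection", "appdb.events",
        key={"userId": "hashed"},
    )

    # Sanity check: inspect the collection's sharding metadata.
    print(client["config"].collections.find_one({"_id": "appdb.events"}))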

The experience you need:
● Engineering degree in Computer Science or a related field
● 10+ years of experience working with databases, most of it with NoSQL technologies
● Expertise in implementing and maintaining distributed big data pipelines and ETL processes
● Solid experience in one of the following cloud-native data platforms: AWS Redshift, Google BigQuery, or Snowflake
● Exposure to real-time processing techniques such as Apache Kafka and CDC tools (Debezium, Qlik Replicate)
● Strong experience with the Linux operating system
● Solid knowledge of database concepts, MongoDB, SQL, and NoSQL internals
● Experience with backup and recovery for production and non-production environments
● Experience with security principles and their implementation
● Exceptionally passionate about always keeping the product quality bar at an extremely high level
Nice-to-haves
● Proficient in one or more of Python, Node.js, Java, or similar languages

Why you want to work with us:
● Great company culture. We pride ourselves on having a values-based culture that is welcoming, intentional, and respectful. Our internal NPS of over 65 speaks for itself: employees recommend Ushur as a great place to work!
● Bring your whole self to work. We are focused on building a diverse culture with innovative ideas, where you and your ideas are valued. We are a start-up and know that every person has a significant impact!
● Rest and relaxation. 13 paid leaves, wellness Friday offs (a day off to care for yourself, every last Friday of the month), 12 paid sick leaves, and more!
● Health benefits. Preventive health checkups, medical insurance covering dependents, wellness sessions, and health talks at the office.
● Keep learning. One of our core values is Growth Mindset; we believe in lifelong learning. Certification courses are reimbursed. The Ushur Community offers a wide range of resources for our employees to learn and grow.
● Flexible work. In-office or hybrid working model, depending on position and location. We seek to create an environment where all our employees can thrive in both their professional and personal lives.
MNC
Agency job
via Fragma Data Systems by Harpreet kour
Bengaluru (Bangalore)
5 - 9 yrs
₹16L - ₹20L / yr
Apache Hadoop
Hadoop
Apache Hive
HDFS
SSL
+1 more
Responsibilities
  • Responsible for the implementation and ongoing administration of Hadoop infrastructure.
  • Align with the systems engineering team to propose and deploy the new hardware and software environments required for Hadoop, and to expand existing environments.
  • Work with data delivery teams to set up new Hadoop users. This includes setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for the new users (a small smoke-test sketch follows this list).
  • Cluster maintenance, as well as creation and removal of nodes, using tools like Ganglia, Nagios, Cloudera Manager Enterprise, Dell OpenManage, and other tools.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
  • Screen Hadoop cluster job performance and do capacity planning.
  • Monitor Hadoop cluster connectivity and security.
  • Manage and review Hadoop log files.
  • File system management and monitoring.
  • Diligently team with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
  • Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
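As an illustration of the new-user checks mentioned above, here is a minimal Python smoke test that authenticates as the new user's Kerberos principal and verifies basic HDFS access. It shells out to the standard kinit and hdfs CLIs on a cluster edge node; the principal, keytab path, and home directory are hypothetical:

    # Hypothetical smoke test for a newly provisioned Hadoop user.
    # Requires the `kinit` and `hdfs` command-line tools on PATH.
    import subprocess

    PRINCIPAL = "alice@EXAMPLE.COM"                  # hypothetical principal
    KEYTAB = "/etc/security/keytabs/alice.keytab"    # hypothetical keytab path
    HOME_DIR = "/user/alice"                         # the user's HDFS home

    def run(cmd):
        """Echo a command and fail loudly if it returns a non-zero status."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Obtain a Kerberos ticket for the new principal.
    run(["kinit", "-kt", KEYTAB, PRINCIPAL])

    # 2. Verify the user can list, write to, and clean up their HDFS home.
    run(["hdfs", "dfs", "-ls", HOME_DIR])
    run(["hdfs", "dfs", "-touchz", f"{HOME_DIR}/_access_check"])
    run(["hdfs", "dfs", "-rm", f"{HOME_DIR}/_access_check"])
    print("HDFS access checks passed for", PRINCIPAL)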
        
Qualifications
  • Bachelor's degree in Information Technology, Computer Science, or another relevant field.
  • General operational expertise, such as good troubleshooting skills and an understanding of systems capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks.
  • Hadoop skills such as HBase, Hive, Pig, and Mahout.
  • Ability to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, monitor critical parts of the cluster, configure NameNode high availability, and schedule and take backups.
  • Good knowledge of Linux, as Hadoop runs on Linux.
  • Familiarity with open-source configuration management and deployment tools such as Puppet or Chef, and with Linux scripting.
Nice to Have
  • Knowledge of troubleshooting core Java applications is a plus.

Posted by Apurva kalsotra
Mohali, Gurugram, Bengaluru (Bangalore), Chennai, Hyderabad, Pune
3 - 8 yrs
₹3L - ₹9L / yr
Data Warehouse (DWH)
Big Data
Spark
Apache Kafka
Data engineering
+14 more
Day-to-day Activities
  • Develop complex queries, pipelines, and software programs to solve analytics and data mining problems
  • Interact with other data scientists, product managers, and engineers to understand business problems and technical requirements, and to deliver predictive and smart data solutions
  • Prototype new applications or data systems
  • Lead data investigations to troubleshoot data issues that arise along the data pipelines
  • Collaborate with different product owners to incorporate data science solutions
  • Maintain and improve the data science platform
Must Have
  • BS/MS/PhD in Computer Science, Electrical Engineering, or related disciplines
  • Strong fundamentals: data structures, algorithms, databases
  • 5+ years of software industry experience, with 2+ years in analytics, data mining, and/or data warehousing
  • Fluency in Python
  • Experience developing web services using REST approaches
  • Proficiency with SQL/Unix/shell
  • Experience in DevOps (CI/CD, Docker, Kubernetes)
  • Self-driven, challenge-loving, detail-oriented, with a teamwork spirit, excellent communication skills, and the ability to multi-task and manage expectations
Preferred
  • Industry experience with big data processing technologies such as Spark and Kafka (a minimal streaming sketch follows this list)
  • Experience with machine learning algorithms and/or R is a plus
  • Experience in Java/Scala is a plus
  • Experience with MPP analytics engines like Vertica
  • Experience with data integration tools like Pentaho or SAP Analytics Cloud
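For a flavour of the Spark-plus-Kafka work mentioned in the preferred skills, here is a minimal Spark Structured Streaming sketch in Python; the broker address and topic name are hypothetical, and it assumes the Spark-Kafka connector package is available to the session:

    # Minimal Structured Streaming sketch: count Kafka events per minute.
    # Assumes the spark-sql-kafka connector is on the classpath; the broker
    # address and topic name are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("kafka-event-counts").getOrCreate()

    raw = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .load()
    )

    counts = (
        raw.selectExpr("CAST(value AS STRING) AS body", "timestamp")
        .groupBy(F.window("timestamp", "1 minute"))
        .count()
    )

    query = counts.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination()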
Posted by Vishal Pednekar
Remote only
7 - 12 yrs
₹20L - ₹40L / yr
Data engineering
Big Data
Data Engineer
Amazon Web Services (AWS)
NOSQL Databases
+1 more
  • Design AWS data ingestion frameworks and pipelines based on the specific needs driven by the Product Owners and user stories…
  • Experience building data lakes on AWS, with hands-on experience in S3, EKS, ECS, AWS Glue, AWS KMS, AWS Firehose, and EMR (a minimal ingestion sketch follows this list)
  • Experience with Apache Spark programming on Databricks
  • Experience working with NoSQL databases such as Cassandra, HBase, and Elasticsearch
  • Hands-on experience leveraging CI/CD to rapidly build and test application code
  • Expertise in data governance and data quality
  • Experience working with PCI data and with data scientists is a plus
  • 4+ years of experience with the following big data concerns: file formats (Parquet, Avro, ORC), resource management, distributed processing, and RDBMS
  • 5+ years of experience designing and developing data pipelines for data ingestion or transformation using AWS technologies
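As an illustration of the data-lake ingestion described above, here is a minimal PySpark sketch that reads raw JSON from S3 and writes it back as partitioned Parquet; the bucket names and columns are hypothetical, and it assumes the session is configured with the s3a connector and AWS credentials:

    # Minimal ingestion sketch: raw JSON in S3 -> deduplicated, partitioned Parquet.
    # Bucket names and columns (event_id, ts) are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("s3-ingest").getOrCreate()

    raw = spark.read.json("s3a://example-raw-bucket/events/2024/")

    cleaned = (
        raw.dropDuplicates(["event_id"])             # hypothetical primary key
        .withColumn("event_date", F.to_date("ts"))   # hypothetical timestamp
        .filter(F.col("event_date").isNotNull())
    )

    (
        cleaned.write
        .mode("append")
        .partitionBy("event_date")                   # partition for cheap date scans
        .parquet("s3a://example-curated-bucket/events/")
    )
    spark.stop()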
Posted by Nataliia Mediana
Remote only
3 - 15 yrs
$50K - $80K / yr
Data Science
Data Scientist
Data engineering
Financial analysis
Finance
+8 more

We are a nascent quantitative hedge fund led by an MIT PhD and Math Olympiad medallist, offering opportunities to grow with us as we build out the team. Our fund has world-class investors and big data experts as part of the GP, and top-notch ML experts as advisers to the fund, plus equity funding to grow the team, license data, and scale the data processing.

We are interested in researching and taking live a variety of quantitative strategies based on historic and live market data, alternative datasets, social media data (both audio and video), and stock fundamental data.

You would join, and, if qualified, lead a growing team of data scientists and researchers, and be responsible for the complete lifecycle of quantitative strategy implementation and trading.

Requirements:

  • At least 3 years of relevant ML experience
  • Graduation date: 2018 or earlier
  • 3-5 years of experience in high-level Python programming
  • Master's degree (or PhD) in a quantitative discipline such as Statistics, Mathematics, Physics, or Computer Science from a top university
  • Good knowledge of applied and theoretical statistics, linear algebra, and machine learning techniques
  • Ability to leverage financial and statistical insights to research, explore, and harness a large collection of quantitative strategies and financial datasets in order to build strong predictive models
  • Should take ownership of the research, design, development, and implementation of strategy development, and communicate effectively with other teammates
  • Prior experience with, and good knowledge of, the lifecycle and pitfalls of algorithmic strategy development and modelling
  • Good practical knowledge of financial statements, value investing, and portfolio and risk management techniques
  • A proven ability to lead and drive innovation to solve challenges and roadblocks in project completion
  • A valid GitHub profile with some activity in it

Bonus to have:

  • Experience storing and retrieving data from large and complex time-series databases
  • Very good practical knowledge of time-series modelling and forecasting (ARIMA, ARCH, and stochastic modelling); a minimal sketch follows this list
  • Prior experience optimizing and backtesting quantitative strategies, doing return and risk attribution, and feature/factor evaluation
  • Knowledge of the AWS/cloud ecosystem is an added plus (EC2, Lambda, EKS, SageMaker, etc.)
  • Knowledge of REST APIs and data extraction and cleaning techniques
  • Good to have: experience in PySpark or other big data programming/parallel computing frameworks
  • Familiarity with derivatives; knowledge of multiple asset classes along with equities
  • Any progress towards CFA or FRM is a bonus
  • Average tenure of at least 1.5 years in a company
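For the time-series point above, here is a minimal forecasting sketch using statsmodels' ARIMA; the series is synthetic, and real strategy research would add stationarity checks, order selection, and walk-forward out-of-sample validation:

    # Minimal ARIMA sketch on a synthetic random-walk series (statsmodels).
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    series = pd.Series(100 + np.cumsum(rng.normal(0, 1, 300)))  # synthetic prices

    # ARIMA(1,1,1): one AR lag, first differencing, one MA lag.
    result = ARIMA(series, order=(1, 1, 1)).fit()

    print(result.summary())
    print(result.forecast(steps=5))  # five-step-ahead point forecasts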
Jaipur
4 - 6 yrs
₹13L - ₹16L / yr
Python
SQL
MySQL
Data Visualization
R
+3 more

About Us
Punchh is the leader in customer loyalty, offer management, and AI solutions for offline and omni-channel merchants including restaurants, convenience stores, and retailers. Punchh brings the power of online to physical brands by delivering omni-channel experiences and personalization across the entire customer journey--from acquisition through loyalty and growth--to drive same store sales and customer lifetime value. Punchh uses best-in-class integrations to POS and other in-store systems such as WiFi, to deliver real-time SKU-level transaction visibility and offer provisioning for physical stores.


Punchh is growing exponentially and serves 200+ brands encompassing 91K+ stores globally. Punchh's customers include top convenience stores such as Casey's General Stores, 25+ of the top 100 restaurant brands such as Papa John's, Little Caesars, Denny's, Focus Brands (5 of 7 brands), and Yum! Brands (KFC, Pizza Hut, and Taco Bell), and retailers. For a multi-billion-dollar brand with 6K+ stores, Punchh drove a 3% lift in same-store sales within the first year. Punchh is powering loyalty programs for 135+ million consumers.

Punchh has raised $70 million from premier Silicon Valley investors including Sapphire Ventures and Adam Street Partners, and has a seasoned leadership team with extensive experience in digital, marketing, CRM, and AI technologies, as well as deep restaurant and retail industry expertise.


About the Role: 

Punchh Tech India Pvt. is looking for a Senior Data Analyst – Business Insights to join our team. If you're excited to be part of a winning team, Punchh is a great place to grow your career.

This position is responsible for discovering the important trends in the complex data generated on the Punchh platform that have high business impact (influencing product features and roadmap), creating hypotheses around those trends, validating them with statistical significance, and making recommendations (a minimal significance-test sketch follows).
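Purely as an illustration of validating a trend with statistical significance, here is a minimal sketch using SciPy; the data is synthetic, and a real analysis would pick the test to match the data and check its assumptions first:

    # Minimal significance-test sketch: did a campaign lift average order value?
    # Synthetic data for illustration only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    control = rng.normal(loc=50.0, scale=10.0, size=500)  # AOV without campaign
    treated = rng.normal(loc=52.0, scale=10.0, size=500)  # AOV with campaign

    # Welch's t-test: does not assume equal variances across groups.
    t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    print("significant at 5%" if p_value < 0.05 else "not significant")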


Reporting to: Director, Analytics

Job Location: Jaipur

Experience Required: 4-6 years


What You’ll Do

  • Take ownership of custom data analysis projects/requests and work closely with end users (both internal and external clients) to deliver the results
  • Identify successful implementation/utilization of product features and contribute to the best-practices playbook for client facing teams (Customer Success)
  • Strive towards building mini business intelligence products that add value to the client base
  • Represent the company's expertise in advanced analytics in a variety of forums, such as client interactions, conferences, blogs, and interviews.

What You’ll Need

  • Master's in business, behavioral economics, or statistics, with a strong interest in marketing technology
  • Proven track record of at least 5 years uncovering business insights, especially related to Behavioral Economics and adding value to businesses
  • Proficient in using the proper statistical and econometric approaches to establish the presence and strength of trends in data. Strong statistical knowledge is mandatory.
  • Extensive prior exposure to causal inference studies, based on both longitudinal and cross-sectional data.
  • Excellent experience using Python (or R) to analyze data from extremely large or complex data sets
  • Exceptional data querying skills (Snowflake/Redshift, Spark, Presto/Athena, to name a few)
  • Ability to effectively articulate complex ideas in simple and effective presentations to diverse groups of stakeholders.
  • Experience working with a visualization tool (preferably, but not restricted to Tableau)
  • Domain expertise: extensive exposure to retail business, restaurant business or worked on loyalty programs and promotion/campaign effectiveness
  • Should be self-organized and be able to proactively identify problems and propose solutions
  • Gels well within and across teams, and works with stakeholders from various functions such as Product, Customer Success, and Implementations, among others
  • As the business-side stakeholders are based in the US, should be flexible to schedule meetings convenient to West Coast timings
  • Effective in working autonomously to get things done and taking the initiatives to anticipate needs of executive leadership
  • Able and willing to relocate to Jaipur post-pandemic.

Benefits:

  • Medical Coverage, to keep you and your family healthy.
  • Compensation that stacks up with other tech companies in your area.
  • Paid vacation days and holidays to rest and relax.
  • Healthy lunch provided daily to fuel you through your work.
  • Opportunities for career growth and training support, including fun team building events.
  • Flexibility and a comfortable work environment for you to feel your best.
 
Anywhere, United States, Canada
3 - 10 yrs
₹2L - ₹10L / yr
Java
Amazon Web Services (AWS)
Big Data
Corporate Training
Data Science
+2 more
To introduce myself, I head Global Faculty Acquisition for Simplilearn.

About my company: SIMPLILEARN has transformed 500,000+ careers across 150+ countries with 400+ courses, and is a Registered Professional Education Provider offering PMI-PMP, PRINCE2, ITIL (Foundation, Intermediate & Expert), MSP, COBIT, Six Sigma (GB, BB & Lean Management), Financial Modeling with MS Excel, CSM, PMI-ACP, RMP, CISSP, CTFL, CISA, CFA Level 1, CCNA, CCNP, Big Data Hadoop, CBAP, iOS, TOGAF, Tableau, Digital Marketing, Data Scientist with Python, Data Science with SAS & Excel, Big Data Hadoop Developer & Administrator, Apache Spark and Scala, Tableau Desktop 9, Agile Scrum Master, Salesforce Platform Developer, and Azure & Google Cloud. Our official website: www.simplilearn.com

If you're interested in teaching, interacting, sharing real-life experiences, and have a passion to transform careers, please join hands with us.

Onboarding Process
• Send your updated CV to my email id, with copies of relevant certificates.
• Sample e-learning access will be shared with a 15-day trial after you register on our website.
• My Subject Matter Expert will evaluate you on your areas of expertise over a telephonic conversation (duration: 15 to 20 minutes).
• Commercial discussion.
• We will register you to our ongoing online sessions to introduce you to our course content and the Simplilearn style of teaching.
• A demo will be conducted to check your training style and internet connectivity.
• Freelancer Master Service Agreement.

Payment Process
• Once a workshop (or the last day of training for the batch) is completed, share your invoice.
• An automated tracking ID will be shared from our automated ticketing system.
• Our faculty group will verify the details provided and share the invoice with our internal finance team to process your payment; if any additional information is required, we will coordinate with you.
• Payment will be processed within 15 working days as per policy, counted from the date the invoice is received.

Please share your updated CV to proceed to the next step of the onboarding process.
Posted by Suchita Upadhyay
Mumbai
2 - 4 yrs
₹6L - ₹15L / yr
Big Data
Hadoop
MySQL
MongoDB
YARN
Job Title: Software Developer – Big Data

Responsibilities
We are looking for a Big Data Developer who can drive innovation, take ownership, and deliver results.
• Understand business requirements from stakeholders
• Build and own Mintifi Big Data applications
• Be heavily involved in every step of the product development process, from ideation to implementation to release
• Design and build systems with automated instrumentation and monitoring
• Write unit and integration tests
• Collaborate with cross-functional teams to validate and get feedback on the efficacy of results created by the big data applications, and use the feedback to improve the business logic
• Take a proactive approach to turning ambiguous problem spaces into clear design solutions

Qualifications
• Hands-on programming skills in Apache Spark using Java or Scala
• Good understanding of data structures and algorithms
• Good understanding of relational and non-relational database concepts (MySQL, Hadoop, MongoDB)
• Experience with Hadoop ecosystem components like YARN and ZooKeeper would be a strong plus