We are looking for a technically driven Full-Stack Engineer for one of our premium clients.
Qualifications
• Bachelor's degree in Computer Science or a related field; a Master's degree is a plus
• 3+ years of relevant work experience
• Meaningful experience with at least two of the following technologies: Python, Scala, Java
• Strong, proven experience with distributed processing frameworks (Spark, Hadoop, EMR) and SQL is very much expected
• Commercial client-facing project experience is helpful, including working in close-knit teams
• Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
• Proven ability to communicate complex solutions clearly
• Understanding of Information Security principles to ensure compliant handling and management of client data
• Experience with and interest in cloud platforms such as AWS, Azure, Google Cloud Platform, or Databricks
• Extraordinary attention to detail
Similar jobs
Have 2 to 6 years of experience working in a similar role in a startup environment
SQL and Excel hold no secrets for you
You love visualizing data with Tableau
Any experience with product analytics tools (Mixpanel, Clevertap) is a plus
You solve math puzzles for fun
A strong analytical mindset with a problem-solving attitude
Comfortable with being critical and speaking your mind
You can easily switch between coding (R or Python) and having a business discussion
Be a team player who thrives in a fast-paced and constantly changing environment
THE ROLE: Sr. Cloud Data Infrastructure Engineer
As a Sr. Cloud Data Infrastructure Engineer with Intuitive, you will be responsible for building new data pipelines and converting pipelines from legacy environments to modern cloud environments, supporting the analytics and data science initiatives of our enterprise customers. You will work closely with SMEs in Data Engineering and Cloud Engineering to create solutions and extend Intuitive's DataOps engineering projects and initiatives. The Sr. Cloud Data Infrastructure Engineer plays a central, critical role in establishing DataOps/DataX data logistics and management: building data pipelines, enforcing best practices, owning the construction of complex and performant Data Lake environments, and working closely with Cloud Infrastructure Architects and DevSecOps automation teams. The Sr. Cloud Data Infrastructure Engineer is the main point of contact for everything related to Data Lake formation and data at scale. In this role, we expect our DataOps leaders to be obsessed with data and with providing insights that help our end customers.
ROLES & RESPONSIBILITIES:
- Design, develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built.
- Develop scalable, reusable frameworks for ingesting large volumes of data from multiple sources.
- Perform modern data orchestration engineering: query tuning, performance tuning, troubleshooting, and debugging of big data solutions.
- Provide technical leadership, foster a team environment, and give mentorship and feedback to technical resources.
- Apply a deep understanding of ETL/ELT design methodologies, patterns, personas, strategy, and tactics for complex data transformations.
- Process and transform data using technologies such as Spark and cloud services (a brief sketch follows this list).
- Understand current data engineering pipelines built on legacy SAS tools and convert them to modern pipelines.
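To make the Spark responsibility above concrete, here is a minimal PySpark sketch of an ingestion-and-transformation step; the bucket paths, column names, and schema are hypothetical, so treat it as an illustration rather than a prescribed implementation.

```python
# Minimal PySpark ingestion/transformation sketch; paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Ingest raw, semi-structured data from a landing zone.
raw = spark.read.json("s3://example-bucket/landing/orders/")

# Cleanse and transform: drop incomplete rows, normalize types, derive columns.
cleansed = (
    raw.dropna(subset=["order_id", "amount"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Write to the cleansed zone, partitioned for downstream analytical queries.
(cleansed.write.mode("overwrite")
         .partitionBy("order_date")
         .parquet("s3://example-bucket/cleansed/orders/"))
```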
Data Infrastructure Engineer Strategy Objectives: End-to-End Strategy
Define how data is acquired, stored, processed, distributed, and consumed.
Collaborate across disciplines with shared responsibility for delivery, progressing our maturity model in the end-to-end data practice.
- Understanding of and experience with modern cloud data orchestration and engineering for one or more of the following cloud providers: AWS, Azure, GCP.
- Leading multiple engagements to design and develop data logistics patterns that support data solutions, using data modeling techniques (such as file-based, normalized or denormalized, star schemas, schema-on-read, Data Vault, graphs) for mixed workloads such as OLTP, OLAP, and streaming, in any format (structured, semi-structured, unstructured).
- Applying leadership and proven experience in architecting and designing data implementation patterns and engineered solutions using native cloud capabilities that span data ingestion & integration (ingress and egress), data storage (raw & cleansed), data prep & processing, master & reference data management, data virtualization & semantic layer, and data consumption & visualization.
- Implementing cloud data solutions in the context of business applications, cost optimization, and clients' strategic needs and future growth goals as they relate to becoming a 'data-driven' organization.
- Applying and creating leading practices that support highly available, scalable, process- and storage-intensive solution architectures for data integration/migration, analytics and insights, AI, and ML requirements.
- Applying leadership and review to create high-quality, detailed documentation related to cloud data engineering.
- Understanding of one or more of the following is a big plus: CI/CD, cloud DevOps, containers (Kubernetes, Docker, etc.), Python/PySpark/JavaScript.
- Implementing cloud data orchestration and data integration patterns (AWS Glue, Azure Data Factory, Event Hub, Databricks, etc.) and storage and processing (Redshift, Azure Synapse, BigQuery, Snowflake).
- A certification in one of the following is a big plus: AWS/Azure/GCP data engineering or migration.
KEY REQUIREMENTS:
- 10+ years of experience as a data engineer.
- 5+ years implementing data engineering solutions with multiple cloud providers and toolsets.
- This is a hands-on role building data pipelines using cloud-native and partner solutions; hands-on technical experience with data at scale is essential.
- Deep expertise in at least one programming language for data processing (Python, Scala), with experience using Python, PySpark, Hadoop, Hive, and/or Spark to write data pipelines and data processing layers.
- Experience with multiple database technologies and patterns, and good SQL experience writing complex SQL transformations.
- Performance tuning of Spark SQL running on S3/data lake/Delta Lake storage, and strong knowledge of Databricks and cluster configuration (a brief sketch follows this list).
- Nice to have: Databricks administration, including Databricks security and infrastructure features.
- Experience with development tools for CI/CD, unit and integration testing, automation, and orchestration.
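As a concrete illustration of the Spark SQL and tuning experience listed above, here is a minimal PySpark sketch of a windowed SQL transformation written out to Delta Lake; the table names, columns, and tuning values are hypothetical and would depend on the actual cluster and data.

```python
# Minimal Spark SQL transformation sketch; names and tuning values are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("sql-transforms")
    # Common knobs for shuffle-heavy SQL; appropriate values depend on cluster size.
    .config("spark.sql.shuffle.partitions", "200")
    .config("spark.sql.adaptive.enabled", "true")
    .getOrCreate()
)

# A complex transformation in SQL: windowed deduplication plus aggregation.
daily_revenue = spark.sql("""
    WITH ranked AS (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY updated_at DESC) AS rn
        FROM sales.orders
    )
    SELECT order_date, SUM(amount) AS revenue
    FROM ranked
    WHERE rn = 1
    GROUP BY order_date
""")

# Persist as a Delta table (assumes Delta Lake is available, e.g. on Databricks).
daily_revenue.write.format("delta").mode("overwrite").saveAsTable("sales.daily_revenue")
```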
As a Data Architect, you work with business leads, analysts, and data scientists to understand the business domain, and you manage data engineers to build data products that empower better decision making. You are passionate about the quality of our business metrics and about building flexible solutions that scale to answer broader business questions.
If you love to solve problems using your skills, then come join Team Searce. We have a
casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.
What You’ll Do
● Understand the business problem and translate it into data services and engineering outcomes
● Explore new technologies and learn new techniques to solve business problems creatively
● Collaborate with many teams - engineering and business, to build better data products
● Manage a team and handle delivery of 2-3 projects
What We’re Looking For
● 4-6 years of experience with:
○ Hands-on experience in at least one programming language (Python, Java, Scala)
○ Understanding of SQL is a must
○ Big data (Hadoop, Hive, YARN, Sqoop)
○ MPP platforms (Spark, Presto)
○ Data pipeline & scheduler tools (Oozie, Airflow, NiFi); see the sketch after this section
○ Streaming engines (Kafka, Storm, Spark Streaming)
○ Any Relational database or DW experience
○ Any ETL tool experience
● Hands-on experience in pipeline design, ETL and application development
● Hands-on experience with cloud platforms like AWS, GCP, etc.
● Good communication skills and strong analytical skills
● Experience in managing teams and delivering projects
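For the scheduler experience mentioned above, here is a minimal Airflow DAG sketch; the DAG id, schedule, and task callables are hypothetical placeholders rather than a prescribed design.

```python
# Minimal Airflow 2.x DAG sketch; dag_id, schedule, and tasks are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling data from source systems")

def load():
    print("loading transformed data into the warehouse")

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # run extract before load
```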
Data Analyst
at Extramarks Education India Pvt Ltd
Required Experience
· 3+ years of relevant technical experience in a data analyst role
· Intermediate to expert skills with SQL and basic statistics
· Experience in advanced SQL
· Python programming is an added advantage
· Strong problem-solving and structuring skills
· Automating connections to various data sources and representing the data through dashboards
· Excellent with numbers; able to communicate data points through various reports/templates
· Ability to communicate effectively within and outside the Data Analytics team
· Proactively take up work responsibilities and take on ad hoc requests as and when needed
· Ability and desire to take ownership of and initiative for analysis, from requirements clarification to deliverable
· Strong technical communication skills; both written and verbal
· Ability to understand and articulate the "big picture" and simplify complex ideas
· Ability to identify and learn applicable new techniques independently as needed
· Must have worked with various databases (relational and non-relational) and ETL processes
· Must have experience handling large volumes of data while adhering to optimization and performance standards
· Should have the ability to analyse and present relationship views of the data from different angles
· Must have excellent communication skills (written and oral)
· Knowledge of data science is an added advantage
Required Skills
MySQL, Advanced Excel, Tableau, reporting and dashboards, MS Office, VBA, analytical skills
Preferred Experience
· Strong understanding of relational databases such as MySQL
· Prior experience working remotely full-time
· Prior experience working with advanced SQL
· Experience with one or more BI tools, such as Superset, Tableau, etc.
· High level of logical and mathematical problem-solving ability
Senior Product Analyst
Pampers Start Up Team
India / Remote Working
Team Description
Our internal team focuses on app development, with data a growing area within the structure. We have a clear vision and strategy coupling app development, data, testing, solutions, and operations. The data team sits across the UK and India, while other teams sit across Dubai, Lebanon, Karachi, and various cities in India.
Role Description
In this role you will use a range of tools and technologies, working primarily on data design, data governance, reporting, and analytics for the Pampers App.
This is a unique opportunity for an ambitious candidate to join a growing business where they will get exposure to a diverse set of assignments, can contribute fully to the growth of the business and where there are no limits to career progression and reward.
Responsibilities
● Be the Data Steward and drive governance, maintaining a full understanding of all the data that flows through the apps to all systems
● Work with the campaign team to apply data fixes when issues arise with campaigns
● Investigate and troubleshoot issues with products and campaigns, providing clear root-cause analysis (RCA) and impact analysis
● Document data, create data dictionaries, and be the “go to” person for understanding what data flows where
● Build dashboards and reports using Amplitude and Power BI, and present them to key stakeholders
● Carry out ad hoc data investigations into issues with the app, querying data in BigQuery/SQL/Cosmos DB, and present findings back (a brief sketch follows this list)
● Translate analytics into a clear PowerPoint deck with actionable insights
● Write up clear documentation on processes
● Innovate with new processes or ways of providing analytics and reporting
● Help the data lead to find new ways of adding value
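To ground the ad hoc investigation work mentioned above, here is a minimal sketch using the google-cloud-bigquery Python client; the project, dataset, table, and event names are hypothetical.

```python
# Minimal BigQuery investigation sketch; project/dataset/table names are hypothetical.
import datetime

from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT event_name, COUNT(*) AS events
    FROM `example-project.app_analytics.events`
    WHERE event_date = @day
    GROUP BY event_name
    ORDER BY events DESC
"""
# Parameterized query: @day is bound to a concrete date at run time.
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("day", "DATE", datetime.date(2024, 1, 1))
    ]
)

for row in client.query(query, job_config=job_config).result():
    print(row.event_name, row.events)
```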
Requirements
● Bachelor’s degree and 4+ years’ experience in an analytical role, preferably in product analytics with consumer app data
● Strong SQL Server and Power BI skills required
● You have experience with most or all of these tools – SQL Server, Python, Power BI, BigQuery.
● Understanding of mobile app data (events, CTAs, screen views, etc.)
● Knowledge of data architecture and ETL
● Experience in analyzing customer behavior and providing insightful recommendations
● Self-starter, with a keen interest in technology and highly motivated towards success
● Must be proactive and prepared to present at meetings
● Must show initiative and desire to learn business subjects
● Able to work independently and provide updates to management
● Strong analytical and problem-solving capabilities with meticulous attention to detail
● Excellent problem-solving skills; proven teamwork and communication skills
● Experience working in a fast-paced, “start-up like” environment
Desirable
- Knowledge of mobile analytical tools (Segment, Amplitude, Adjust, Braze and Google Analytics)
- Knowledge of loyalty data
We are looking for
A Senior Software Development Engineer (SDE2) who will be instrumental in the design and development of our backend technology, which manages our exhaustive data pipelines and AI models. Simplifying complexity and building technology that is robust and scalable is your North Star. You'll work closely alongside our CTO, machine learning engineers, and the frontend and wider technical teams to build new capabilities, focused on speed and reliability.
You'll own your work, to build, test and iterate quickly, with direct guidance from our CTO.
Please note: You must have proven industry experience greater than 2 years.
Your work includes
- Own and manage the whole engineering infrastructure that supports the Greendeck platform.
- Create highly scalable, highly robust, and highly available Python microservices.
- Design the architecture to stream data at huge scale across multiple services.
- Create and manage data pipelines using tools like Kafka and Celery.
- Deploy serverless functions to process and manage data.
- Work with a variety of databases and storage systems to store and strategically manage data.
- Write connectors to collect data from various third-party services, data stores, and APIs.
Skills/ Requirements
- Strong experience in Python, creating scripts, apps, or services
- Strong automation and scripting skills
- Knowledge of at least one SQL and one NoSQL database
- Experience working with messaging systems like Kafka and RabbitMQ
- Good knowledge of dataframes and data manipulation
- Have built and deployed apps using FastAPI, Flask, or similar tech (a brief sketch follows this list)
- Knowledge of the CI/CD paradigm
- Basic knowledge of Docker
- Knowledge of creating and using REST APIs
- Good knowledge of OOP fundamentals
- (Optional) Knowledge of Celery/Airflow
- (Optional) Knowledge of Lambda/serverless
- (Optional) Have connected apps using OAuth
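As a concrete illustration of the FastAPI experience asked for above, here is a minimal microservice sketch; the service name, endpoints, and model fields are hypothetical.

```python
# Minimal FastAPI microservice sketch; endpoints and fields are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-price-service")

class PriceCheck(BaseModel):
    product_id: str
    price: float

@app.get("/health")
def health() -> dict:
    # Liveness probe for container orchestration.
    return {"status": "ok"}

@app.post("/prices")
def ingest_price(item: PriceCheck) -> dict:
    # In a real service this might publish to Kafka or enqueue a Celery task.
    return {"received": item.product_id, "price": item.price}

# Run with: uvicorn main:app --reload  (assuming this file is saved as main.py)
```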
What you can expect
- Attractive pay, bonus scheme and flexible vacation policy.
- A truly flexible, trust-based, performance driven work culture.
- Lunch is on us, every day!
- A young and passionate team building elegant products with intricate technology for the future of businesses around the world. Our average age is 25!
- The chance to make a huge difference to the success of a world-class SaaS product and the opportunity to make an impact.
It's important to us
- That you relocate to Indore
- That you have a minimum of 2 years of experience working as a Software Developer
Role Summary
We are looking for an analytically inclined, insights-driven Product Analyst to make our organisation more data-driven. In this role you will be responsible for creating dashboards to drive insights for product and business teams, empowering them in everything from day-to-day decisions to long-term impact assessment and measuring the efficacy of different products or teams. The growing nature of the team will require you to be in touch with all of the teams at upGrad. Are you the "go-to" person everyone looks to for data? Then this role is for you.
Roles & Responsibilities
- Lead and own the analysis of highly complex data sources, identifying trends and patterns in data, and provide insights/recommendations based on analysis results
- Build, maintain, own, and communicate detailed reports to assist Marketing, Growth/Learning Experience, and other business/executive teams
- Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. that drive key business decisions
- Analyze data and generate insights in the form of user analysis, user segmentation, performance reports, etc. (a brief sketch follows this list)
- Facilitate review sessions with management, business users, and other team members
- Design and create visualizations to present actionable insights related to the data sets and business questions at hand
- Develop intelligent models around channel performance, user profiling, and personalization
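To make the segmentation work above concrete, here is a minimal pandas sketch; the DataFrame columns, thresholds, and segment names are hypothetical.

```python
# Minimal user-segmentation sketch; columns and thresholds are hypothetical.
import pandas as pd

users = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "sessions_30d": [25, 2, 11, 0],
    "purchases_30d": [3, 0, 1, 0],
})

def segment(row: pd.Series) -> str:
    # Simple rule-based segments derived from recent activity.
    if row["purchases_30d"] > 0 and row["sessions_30d"] >= 10:
        return "power"
    if row["sessions_30d"] > 0:
        return "active"
    return "dormant"

users["segment"] = users.apply(segment, axis=1)
print(users.groupby("segment").size())
```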
Skills Required
- 4-6 years of hands-on experience with product-related analytics and reporting
- Experience building dashboards in Tableau or other data visualization tools such as D3
- Strong data, statistics, and analytical skills with a good grasp of SQL
- Programming experience in Python is a must
- Comfortable managing large data sets
- Good Excel/data management skills
Analytics Scientist - Risk Analytics
at a market-leading fintech company dedicated to providing credit