We are looking for a Senior Database Developer to make a senior-level contribution to the design, development, and implementation of critical enterprise business applications for marketing systems.
- Play a lead role in developing, deploying and managing our databases (Oracle, MySQL and MongoDB) on public clouds.
- Design and develop PL/SQL processes to perform complex ETL processes.
- Develop UNIX and Perl scripts for data auditing and automation.
- Responsible for database builds and change requests.
- Holistically define the overall reference architecture and manage its overall implementation in the production systems.
- Identify architecture gaps that can improve availability, performance and security for both production systems and database systems, and work towards resolving those issues.
- Work closely with Engineering, Architecture, Business and Operations teams to provide necessary and continuous feedback.
- Automate all the manual steps for the database platform.
- Deliver solutions for access management, availability, security, replication and patching.
- Troubleshoot application database performance issues.
- Participate in daily huddles (30 min.) to collaborate with onshore and offshore teams.
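The auditing duties above would normally be written in PL/SQL or Perl in this stack; purely as an illustration, a minimal post-load row-count audit can be sketched in Python using the standard-library sqlite3 module (the table names are invented for the sketch):

```python
import sqlite3

def audit_row_counts(conn, staging_table, target_table):
    """Reconcile row counts between a staging table and its load target."""
    cur = conn.cursor()
    staged = cur.execute(f"SELECT COUNT(*) FROM {staging_table}").fetchone()[0]
    loaded = cur.execute(f"SELECT COUNT(*) FROM {target_table}").fetchone()[0]
    return {"staged": staged, "loaded": loaded, "match": staged == loaded}

# Demo on an in-memory database (hypothetical staging/dimension tables).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_customers (id INTEGER, name TEXT);
    CREATE TABLE dim_customers (id INTEGER, name TEXT);
    INSERT INTO stg_customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO dim_customers SELECT * FROM stg_customers;
""")
result = audit_row_counts(conn, "stg_customers", "dim_customers")
print(result)
```

In a production Oracle environment the same check would run over database links or via scheduled shell scripts, with mismatches raised to the on-call queue.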
- 5+ years of experience in database development.
- Bachelor’s degree in Computer Science, Computer Engineering, Math, or similar.
- Experience using ETL tools (Talend or Ab Initio a plus).
- Experience with relational database programming, processing and tuning (Oracle, PL/SQL, MySQL, MS SQL Server, SQL, T-SQL).
- Familiarity with BI tools (Cognos, Tableau, etc.).
- Experience with Cloud technology (AWS, etc.).
- Agile or Waterfall methodology experience preferred.
- Experience with API integration.
- Advanced software development and scripting skills for use in automation and interfacing with databases.
- Knowledge of software development lifecycles and methodologies.
- Knowledge of developing procedures, packages and functions in a DW environment.
- Knowledge of UNIX, Linux and Service Oriented Architecture (SOA).
- Ability to multi-task, to work under pressure, and think analytically.
- Ability to work with minimal supervision and meet deadlines.
- Ability to write technical specifications and documents.
- Ability to communicate effectively with individuals at all levels in the company and with various business contacts outside of the company in an articulate, professional manner.
- Knowledge of CDP, CRM, MDM and Business Intelligence is a plus.
- Flexible work hours.
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
5-6 years of experience
Good knowledge of relational and non-relational databases
Ability to write complex queries, identify problematic queries, and provide solutions
Good hands-on experience with database tools
Experience with both SQL and NoSQL databases such as SQL Server, PostgreSQL, MongoDB, MariaDB, etc.
Experience with data model preparation, database structuring, etc.
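For the "identify problematic queries" requirement, one common workflow is to read the engine's query plan before and after adding an index. Here is a minimal sketch using SQLite's EXPLAIN QUERY PLAN from Python; the real work would use the target engine's own EXPLAIN, and the table and index names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"

# Before indexing: the plan reports a full table scan.
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(before)

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# After indexing: the plan switches to an index search.
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(after)
```

The same before/after comparison is how a slow production query is usually diagnosed: find the scan, add or fix the index, confirm the plan changed.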
Peliqan is a highly scalable and secure cloud solution for data collaboration in the modern data stack. We are on a mission to reinvent how data is shared in companies. We have offices in Ghent (Belgium) and Bengaluru (India).
After the initial R&D phase, we are now ready to build the first version of our product. We are recruiting a core tactical team with motivated people, true team players, to launch our disruptive offering.
As a cloud engineer & architect you will be responsible for the overall cloud infrastructure running on AWS, as well as owning an on-prem solution running on Docker/Kubernetes.
- Manage the overall AWS infrastructure (CloudOps, system administration)
- Define the architecture of both the cloud offering (SaaS) and the on-prem self-hosted solution for customers (Docker/Kubernetes)
- Define and implement CI/CD pipelines
- Implement automation for all aspects of the deployment of our environments
- Support the development team in setting up DevOps best practices
- Own the scalability, reliability and security of the overall platform
- Experienced in AWS cloud, Docker, Kubernetes, Terraform, Ansible, Jenkins (or similar tools)
- Great Linux system administration skills
- Good understanding of data pipelines, data warehouses (e.g. Redshift, Snowflake), data lakes (e.g. Databricks), ETL/ELT, pipeline orchestration (e.g. Airflow), Spark, query engines (e.g. Presto/Trino) because the Peliqan solution needs to fit in perfectly in this overall ecosystem
- Good automation skills
- Very good understanding of cloud infrastructures, networking, firewalls, routing, DNS, SSL etc.
- Extreme focus on security & reliability
- Experience in database tuning is an asset
- You are motivated and proactive, you have an eye for detail, and cloud infrastructure management is your passion
Working at Peliqan
- Attractive salary
- Dynamic, welcoming and multicultural working environment
About Slintel (a 6sense company) :
Slintel, a 6sense company and the leader in capturing technographics-powered buying intent, helps companies uncover the 3% of active buyers in their target market. Slintel evaluates over 100 billion data points and analyzes factors such as buyer journeys, technology adoption patterns, and other digital footprints to deliver market and sales intelligence.
Slintel's customers have access to the buying patterns and contact information of more than 17 million companies and 250 million decision makers across the world.
Slintel is a fast growing B2B SaaS company in the sales and marketing tech space. We are funded by top tier VCs, and going after a billion dollar opportunity. At Slintel, we are building a sales development automation platform that can significantly improve outcomes for sales teams, while reducing the number of hours spent on research and outreach.
We are a big data company and perform deep analysis on technology buying patterns and buyer pain points to understand where buyers are in their journey. Over 100 billion data points are analyzed every week to derive recommendations on where companies should focus their marketing and sales efforts. Third-party intent signals are then combined with first-party data from CRMs to derive meaningful recommendations on whom to target on any given day.
6sense is headquartered in San Francisco, CA and has 8 office locations across 4 countries.
6sense, an account engagement platform, secured $200 million in a Series E funding round, bringing its total valuation to $5.2 billion 10 months after its $125 million Series D round. The investment was co-led by Blue Owl and MSD Partners, among other new and existing investors.
Linkedin (Slintel) : https://www.linkedin.com/company/slintel/
Industry : Software Development
Company size : 51-200 employees (189 on LinkedIn)
Headquarters : Mountain View, California
Founded : 2016
Specialties : Technographics, lead intelligence, Sales Intelligence, Company Data, and Lead Data.
Website (Slintel) : https://www.slintel.com/slintel
Linkedin (6sense) : https://www.linkedin.com/company/6sense/
Industry : Software Development
Company size : 501-1,000 employees (937 on LinkedIn)
Headquarters : San Francisco, California
Founded : 2013
Specialties : Predictive intelligence, Predictive marketing, B2B marketing, and Predictive sales
Website (6sense) : https://6sense.com/
Funding Details & News :
Slintel funding : https://www.crunchbase.com/organization/slintel
6sense funding : https://www.crunchbase.com/organization/6sense
About the job
- Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse
- Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs
- Assemble large, complex data sets from third-party vendors to meet business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology
- Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems
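As a rough sketch of the assemble-and-load duties above (not Slintel's actual pipeline; the feed shape and table name are invented), a tiny extract-transform-load pass in Python might look like:

```python
import sqlite3

# Stand-in for a third-party vendor feed (field names are hypothetical).
raw_feed = [
    {"company": " Acme Corp ", "tech": "Snowflake"},
    {"company": "acme corp", "tech": "Snowflake"},   # duplicate after cleanup
    {"company": "Globex", "tech": "Airflow"},
]

def transform(records):
    """Normalize company names and drop duplicates introduced by messy feeds."""
    seen, clean = set(), []
    for r in records:
        key = (r["company"].strip().lower(), r["tech"])
        if key not in seen:
            seen.add(key)
            clean.append({"company": key[0], "tech": r["tech"]})
    return clean

# Load the cleaned records into a warehouse-style table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tech_install (company TEXT, tech TEXT)")
rows = transform(raw_feed)
conn.executemany("INSERT INTO tech_install VALUES (:company, :tech)", rows)
count = conn.execute("SELECT COUNT(*) FROM tech_install").fetchone()[0]
print(count)  # 2
```

At production scale the transform step would run in Spark and the load target would be Redshift or Snowflake, but the extract-clean-dedupe-load shape is the same.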
- 3+ years of experience in a Data Engineer role
- Proficiency in Linux
- Must have SQL knowledge and experience working with relational databases, query authoring (SQL), as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena
- Must have experience with Python/Scala
- Must have experience with Big Data technologies like Apache Spark
- Must have experience with Apache Airflow
- Experience with data pipeline and ETL tools like AWS Glue
- Experience working with AWS cloud services: EC2, S3, RDS, Redshift, and other data solutions, e.g. Databricks, Snowflake
Desired Skills and Experience
Python, SQL, Scala, Spark, ETL
We are hiring for Data Engineer.
- Exp: 2-4 Years
- CTC: Up to 10 LPA
- Location: Remote, Pune, Gurugram, New Delhi
- Experience designing, building and maintaining data architecture and warehousing using AWS services
- Proficient in ETL optimization, designing, coding, and tuning big data processes using Apache Spark, R, Python, C# and/or similar technologies
- Experience managing AWS resources using Terraform
- Experience in Data engineering and infrastructure work for analytical and machine learning processes
- Experience with ETL tooling, migrating ETL code from one technology to another will be a benefit
- Experience with data visualisation/dashboarding tools for QA/QC of data processes
If interested, kindly share your updated CV at [email protected] tigihr. com
We are looking for a Senior Data Engineer to join the Customer Innovation team, who will be responsible for acquiring, transforming, and integrating customer data onto our Data Activation Platform from customers’ clinical, claims, and other data sources. You will work closely with customers to build data and analytics solutions to support their business needs, and be the engine that powers the partnership that we build with them by delivering high-fidelity data assets.
In this role, you will work closely with our Product Managers, Data Scientists, and Software Engineers to build the solution architecture that will support customer objectives. You'll work with some of the brightest minds in the industry, work with one of the richest healthcare data sets in the world, use cutting-edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate is someone that
- Has healthcare experience and is passionate about helping heal people,
- Loves working with data,
- Has an obsessive focus on data quality,
- Is comfortable with ambiguity and making decisions based on available data and reasonable assumptions,
- Has strong data interrogation and analysis skills,
- Defaults to written communication and delivers clean documentation, and,
- Enjoys working with customers and problem solving for them.
A day in the life at Innovaccer:
- Define the end-to-end solution architecture for projects by mapping customers’ business and technical requirements against the suite of Innovaccer products and Solutions.
- Measure and communicate impact to our customers.
- Enable customers to activate data themselves using SQL, BI tools, or APIs to answer their questions at speed.
What You Need:
- 4+ years of experience in a Data Engineering role, and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- 4+ years of experience working with relational databases like Snowflake, Redshift, or Postgres.
- Intermediate to advanced level SQL programming skills.
- Data analytics and visualization (using tools like Power BI)
- The ability to engage with both the business and technical teams of a client - to document and explain technical problems or concepts in a clear and concise way.
- Ability to work in a fast-paced and agile environment.
- Easily adapt and learn new things whether it’s a new library, framework, process, or visual design concept.
What we offer:
- Industry certifications: We want you to be a subject matter expert in what you do. So, whether it’s our product or our domain, we’ll help you dive in and get certified.
- Quarterly rewards and recognition programs: We foster learning and encourage people to take risks. We recognize and reward your hard work.
- Health benefits: We cover health insurance for you and your loved ones.
- Sabbatical policy: We encourage people to take time off and rejuvenate, learn new skills, and pursue their interests so they can generate new ideas with Innovaccer.
- Pet-friendly office and open floor plan: No boring cubicles.
Senior Product Analyst
Pampers Start Up Team
India / Remote Working
Our internal team focuses on app development, with data a growing area within the structure. We have a clear vision and strategy coupled with App Development, Data, Testing, Solutions and Operations. The data team sits across the UK and India, whilst other teams sit across Dubai, Lebanon, Karachi and various cities in India.
In this role you will use a range of tools and technologies, primarily to provide data design, data governance, reporting and analytics on the Pampers App.
This is a unique opportunity for an ambitious candidate to join a growing business where they will get exposure to a diverse set of assignments, can contribute fully to the growth of the business and where there are no limits to career progression and reward.
● To be the Data Steward and drive governance having full understanding of all the data that flows through the Apps to all systems
● Work with the campaign team to apply data fixes when issues arise with campaigns
● Investigate and troubleshoot issues with product and campaigns giving clear RCA and impact analysis
● Document data, create data dictionaries and be the “go to” person for understanding data flows
● Build dashboards and reports using Amplitude, Power BI and present to the key stakeholders
● Carry out ad hoc data investigations into issues with the app, querying data in BigQuery/SQL/CosmosDB, and present findings back
● Translate analytics into a clear PowerPoint deck with actionable insights
● Write up clear documentation on processes
● Innovate with new processes or ways of providing analytics and reporting
● Help the data lead to find new ways of adding value
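An ad hoc investigation like the ones described above often starts as a small rollup script before it becomes a dashboard or a slide. A hedged sketch with invented event data (in practice the events would be queried from BigQuery or Amplitude):

```python
from collections import Counter
from datetime import date

# Hypothetical app event log; the event shape is assumed for illustration.
events = [
    {"day": date(2023, 5, 1), "event": "screen_view"},
    {"day": date(2023, 5, 1), "event": "cta_click"},
    {"day": date(2023, 5, 2), "event": "cta_click"},
    {"day": date(2023, 5, 2), "event": "cta_click"},
]

# Rollup: CTA clicks per day, the kind of number that backs a findings deck.
clicks_per_day = Counter(e["day"] for e in events if e["event"] == "cta_click")
print(sorted(clicks_per_day.items()))
```

The same aggregation expressed in SQL (GROUP BY day with a WHERE filter on the event name) is what would actually run against BigQuery.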
● Bachelor’s degree and a minimum of 4+ years’ experience in an analytical role preferably working in product analytics with consumer app data
● Strong SQL Server and Power BI required
● You have experience with most or all of these tools – SQL Server, Python, Power BI, BigQuery.
● Understanding of mobile app data (Events, CTAs, Screen Views etc)
● Knowledge of data architecture and ETL
● Experience in analyzing customer behavior and providing insightful recommendations
● Self-starter, with a keen interest in technology and highly motivated towards success
● Must be proactive and be prepared to address meetings
● Must show initiative and desire to learn business subjects
● Able to work independently and provide updates to management
● Strong analytical and problem-solving capabilities with meticulous attention to detail
● Excellent problem-solving skills; proven teamwork and communication skills
● Experience working in a fast paced “start-up like” environment
- Knowledge of mobile analytical tools (Segment, Amplitude, Adjust, Braze and Google Analytics)
- Knowledge of loyalty data
- Must have 4 to 7 years of experience in ETL Design and Development using Informatica Components.
- Should have extensive knowledge in Unix shell scripting.
- Understanding of DW principles (Fact, Dimension tables, Dimensional Modelling and Data warehousing concepts).
- Research, development, document and modification of ETL processes as per data architecture and modeling requirements.
- Ensure appropriate documentation for all new development and modifications of the ETL processes and jobs.
- Should be good in writing complex SQL queries.
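The DW principles above (fact and dimension tables) can be illustrated with a minimal star-schema rollup; the schema below is invented for the sketch, and SQLite via Python is used only for convenience:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Minimal star schema: one fact table, one dimension table.
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (product_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'diapers'), (2, 'wipes');
    INSERT INTO fact_sales  VALUES (1, 10.0), (1, 5.0), (2, 2.5);
""")

# A typical warehouse rollup: join the fact to its dimension and aggregate.
totals = conn.execute("""
    SELECT d.category, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_product d USING (product_id)
    GROUP BY d.category
    ORDER BY revenue DESC
""").fetchall()
print(totals)  # [('diapers', 15.0), ('wipes', 2.5)]
```

In an Informatica shop the load into these tables would be a mapping/workflow rather than hand-written inserts, but the fact-to-dimension join shape of the reporting queries is the same.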
- Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps tools, and Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume and Kafka, and would get the chance to be part of enterprise-grade implementations of Cloud and Big Data systems
- Will play an active role in setting up the modern data platform based on Cloud and Big Data
- Will be part of teams with rich experience in various aspects of distributed systems and computing.
- Expertise in designing and implementing enterprise-scale database (OLTP) and data warehouse solutions.
- Hands-on experience in implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics) and big data processing using Azure Databricks and Azure HDInsight.
- Expert in writing T-SQL programming for complex stored procedures, functions, views and query optimization.
- Should be aware of database development for both on-premises and SaaS applications using SQL Server and PostgreSQL.
- Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
- Experience and expertise in building machine learning models using logistic and linear regression, decision tree and random forest algorithms.
- PolyBase queries for exporting and importing data into Azure Data Lake.
- Building data models both tabular and multidimensional using SQL Server data tools.
- Writing data preparation, cleaning and processing steps using Python, SCALA, and R.
- Programming experience using python libraries NumPy, Pandas and Matplotlib.
- Implementing NoSQL databases and writing queries using Cypher.
- Designing end user visualizations using Power BI, QlikView and Tableau.
- Experience working with all versions of SQL Server 2005/2008/2008R2/2012/2014/2016/2017/2019
- Experience using the expression languages MDX and DAX.
- Experience in migrating on-premise SQL server database to Microsoft Azure.
- Hands on experience in using Azure blob storage, Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2.
- Performance tuning complex SQL queries, hands on experience using SQL Extended events.
- Data modeling using Power BI for ad hoc reporting.
- Raw data load automation using T-SQL and SSIS
- Expert in migrating existing on-premise database to SQL Azure.
- Experience in using U-SQL for Azure Data Lake Analytics.
- Hands on experience in generating SSRS reports using MDX.
- Experience in designing predictive models using Python and SQL Server.
- Developing machine learning models using Azure Databricks and SQL Server
Problem Formulation: Identifies possible options to address the business problems; must possess a good understanding of dimensional modelling
Must have worked on at least one end-to-end project using any cloud data warehouse (Azure Synapse, AWS Redshift, Google BigQuery)
Good to have an understanding of Power BI and integration with any cloud services like Azure or GCP
Experience of working with SQL Server, SSIS (preferred)
Applied Business Acumen: Supports the development of business cases and recommendations. Owns delivery of project activity and tasks assigned by others. Supports process updates and changes. Solves business issues.
The ETL developer is responsible for designing and creating the data warehouse and all related data extraction, transformation and load functions in the company.
The developer should provide oversight and planning of data models, database structural design and deployment, and work closely with the data architect and business analyst.
Duties include working in cross-functional software development teams (business analysts, testers, developers) following agile ceremonies and development practices.
The developer plays a key role in contributing to the design, evaluation, selection, implementation and support of databases solution.
Development and Testing: Develops code for the required solution by determining the appropriate approach and leveraging business, technical, and data requirements.
Creates test cases to review and validate the proposed solution design. Works on POCs and deploys the software to production servers.
Good to Have (Preferred Skills):
- Minimum 4-8 years of experience in data warehouse design and development for large-scale applications
- Minimum 3 years of experience with star schema, dimensional modelling and extract, transform, load (ETL) design and development
- Expertise working with various databases (SQL Server, Oracle)
- Experience developing Packages, Procedures, Views and triggers
- Nice to have: Big Data technologies
- The individual must have good written and oral communication skills.
- Nice to have: SSIS
Education and Experience
- Minimum 4-8 years of software development experience
- Bachelor's and/or Master’s degree in computer science
Please reply with the details below.
Any offers: Y/N
Present Company Name:
Reason for job change:
At a top Indian MNC
Data Engineer - GCP - data migration/Postgres/Oracle
Budget : 200,000 Rs/Month
Domain Knowledge : Banking
Contract Hire : 12 Months
No. of Open Positions : 2
Roles and Responsibilities :
- This role will involve all aspects of data migration with respect to any microservice being migrated.
- Analyze the source schema for the service in question
- Skilled at using the migration design to produce an instance of any templates or artifacts required to migrate the data
- Deliver a working version of the migration design to migrate the data for the service in question
- Develop a suite of scripts that will allow technical verification - either automated (e.g. row counts) or manual (e.g. field comparisons)
- Assist in the triage and resolution of any data related defects
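The verification scripts described above (automated row counts, manual field comparisons) can be sketched as follows; SQLite stands in here for the actual source and target engines (Oracle and Postgres), and the table is invented:

```python
import sqlite3

# Stand-ins for the source (Oracle) and target (Postgres) databases.
src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")
for db in (src, tgt):
    db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
src.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 250.5)])
tgt.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 250.5)])

def verify(table, key, fields):
    """Check row counts match and each field agrees row by row after migration."""
    q = f"SELECT {key}, {', '.join(fields)} FROM {table} ORDER BY {key}"
    src_rows = src.execute(q).fetchall()
    tgt_rows = tgt.execute(q).fetchall()
    return {"rowcount_ok": len(src_rows) == len(tgt_rows),
            "fields_ok": src_rows == tgt_rows}

report = verify("accounts", "id", ["balance"])
print(report)
```

A real migration harness would also handle type coercion between engines (e.g. Oracle NUMBER vs Postgres NUMERIC) and sample large tables rather than comparing every row.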
Mandatory skills :
- Previous experience of data migration
- Knowledge of Postgres and Oracle
- Previous experience of Data Analysis