About SkyPoint Cloud
SkyPoint’s mission is to bring people and data together.
We are the industry's first Modern Data Stack Platform with built-in data lakehouse, account 360, customer 360, entity resolution, data privacy vault, ELT / Reverse ETL, data integration, privacy compliance automation, data governance, analytics, and managed services for organizations in several industries including healthcare, life sciences, senior living, retail, hospitality, business services, and financial services.
Our platform enables organizations to take control of their customer data, deliver unmatched customer experiences and build brand loyalty.
Industry leaders and over 10 million end-users currently use SkyPoint.
SkyPoint delivers unmatched customer experiences with world-class AI and analytics. We offer consumption-based pricing and a wealth of features that drive the best outcomes for customers, employees, and brands. Using 200+ built-in connectors, SkyPoint helps you manage large volumes of data and gain actionable insights by unifying disparate systems and providing a single source of truth.
Trust is the most important part of your business and trust is what matters most to the SkyPoint Cloud team. Together, we solve complexities with transparency and turn your business into a trusted brand.
Modern Data Stack Platform, Machine Learning, Identity Resolution, Data Privacy, CCPA, GDPR, Privacy Compliance, Consent Management, Big Data, HIPAA, PCI, Sensitive Data Vault, Privacy API, Data Lake, Data Warehouse, Data Lakehouse, AI, Data as a Product, Semantic Layer, Customer 360, Data Apps, and Data Transformation
Primary Duties and Responsibilities
- Experience with Informatica Multidomain MDM 10.4 tool suite preferred
- Partnering with data architects and engineers to ensure an optimal data model design and implementation for each MDM domain in accordance with industry and MDM best practices
- Works with data governance and business steward(s) to design, develop, and configure business rules for data validation, standardization, matching, and merging
- Implements data quality policies, procedures, and standards with the Data Governance Team to maintain customer, location, product, and other data domains; experience with the Informatica IDQ tool suite preferred.
- Performs data analysis and source-to-target mapping for ingest and egress of data.
- Maintain compliance with change control, SDLC, and development standards.
- Champion the creation and contribution to technical documentation and diagrams.
- Establishes a technical vision and strategy with the team and works with the team to turn it into reality.
- Emphasis on coaching and training to cultivate skill development of team members within the department.
- Responsible for keeping up with industry best practices and trends.
- Monitor, troubleshoot, maintain, and continuously improve the MDM ecosystem.
Secondary Duties and Responsibilities
- May participate in off-hours on-call rotation.
- Attends and is prepared to participate in team, department and company meetings.
- Performs other job related duties and special projects as assigned.
This is a non-management role
Education and Experience
- Bachelor's degree in MIS, Computer Sciences, Business Administration, or related field; or High School Degree/General Education Diploma and 4 years of relevant experience in lieu of Bachelor's degree.
- 5+ years of experience in implementing MDM solutions using Informatica MDM.
- 2+ years of experience in data stewardship, data governance, and data management concepts.
- Professional working knowledge of the Customer 360 solution.
- Professional working knowledge of multi-domain MDM data modeling.
- Strong understanding of company master data sets and their application in complex business processes; able to support data profiling, extraction, and cleansing activities using Informatica Data Quality (IDQ).
- Strong knowledge in the installation and configuration of the Informatica MDM Hub.
- Familiarity with real-time, near real-time and batch data integration.
- Strong experience with and understanding of the Informatica toolset, including Informatica MDM Hub, Informatica Data Quality (IDQ), Informatica Customer 360, Informatica EDC, Hierarchy Manager (HM), Business Entity Service Model, Address Doctor, Customizations & Composite Services
- Experience with event-driven architectures (e.g. Kafka, Google Pub/Sub, Azure Event Hub, etc.).
- Professional working knowledge of CI/CD technologies such as Concourse, TeamCity, Octopus, Jenkins, and CircleCI.
- Team player that exhibits high energy, strategic thinking, collaboration, direct communication and results orientation.
- Visual requirements include: ability to see detail at near range with or without correction. Must be physically able to perform sedentary work: occasionally lifting or carrying objects of no more than 10 pounds, and occasionally standing or walking, reaching, handling, grasping, feeling, talking, hearing and repetitive motions.
- The duties of this position are performed through a combination of an open office setting and remote work options. Full remote work options are available for employees who reside outside of the Des Moines Metro Area. There is frequent pressure to meet deadlines and handle multiple projects in a day.
Equipment Used to Perform Job
- Windows, or Mac computer and various software solutions.
- Responsible for company assets including maintenance of software solutions.
- Has frequent contact with office personnel in other departments related to the position, as well as occasional contact with users and customers. Engages stakeholders from other areas of the business.
- Has access to confidential information including trade secrets, intellectual property, various financials, and customer data.
- You will partner with teammates to create complex data processing pipelines in order to solve our clients' most complex challenges
- You will pair to write clean and iterative code based on TDD
- Leverage various continuous delivery practices to deploy, support and operate data pipelines
- Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
- Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
- Create data models and speak to the tradeoffs of different modeling approaches
- Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process
- Encouraging open communication and advocating for shared outcomes
- You have a good understanding of data modelling and experience with data engineering tools and platforms such as Spark (Scala) and Hadoop
- You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, or NoSQL databases (HBase, Cassandra, etc.) and any of the distributed processing and orchestration platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting
- Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)
- You are comfortable taking data-driven approaches and applying data security strategy to solve business problems
- Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems
- You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments
- You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives
- An interest in coaching, sharing your experience and knowledge with teammates
- You enjoy influencing others and always advocate for technical excellence while being open to change when needed
- Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more
Role: Senior Customer Scientist
Experience: 6-8 Years
Location: Chennai (Hybrid)
Who are we?
A young, fast-growing AI and big data company, with an ambitious vision to simplify the world’s choices. Our clients are top-tier enterprises in the banking, e-commerce, and travel spaces. They use our core AI-based choice engine, maya.ai, to deliver personal digital experiences centered around taste. The maya.ai platform now touches over 125M customers globally. You’ll find Crayon Boxes in Chennai and Singapore. But you’ll find Crayons in every corner of the world. Especially where our client projects are – UAE, India, SE Asia and pretty soon the US.
Life in the Crayon Box is a little chaotic, largely dynamic and keeps us on our toes! Crayons are a diverse and passionate bunch. Challenges excite us. Our mission drives us. And good food, caffeine (for the most part) and youthful energy fuel us. Over the last year alone, Crayon has seen a growth rate of 3x, and we believe this is just the start.
We’re looking for young and young-at-heart professionals with a relentless drive to help Crayon double its growth. Leaders, doers, innovators, dreamers, implementers and eccentric visionaries, we have a place for you all.
Can you say “Yes, I have!” to the below?
- Experience with exploratory analysis, statistical analysis, and model development
- Knowledge of advanced analytics techniques, including predictive modelling (e.g., logistic regression), segmentation, forecasting, data mining, and optimization
- Knowledge of software packages such as SAS, R, and RapidMiner for analytical modelling and data management
- Strong experience in SQL/Python/R, working efficiently at scale with large data sets
- Experience in using Business Intelligence tools such as PowerBI, Tableau, Metabase for business applications
Can you say “Yes, I will!” to the below?
- Drive clarity and solve ambiguous, challenging business problems using data-driven approaches. Propose and own data analysis (including modelling, coding, analytics) to drive business insight and facilitate decisions.
- Develop creative solutions and build prototypes to business problems using algorithms based on machine learning, statistics, and optimisation, and work with engineering to deploy those algorithms and create impact in production.
- Perform time-series analyses, hypothesis testing, and causal analyses to statistically assess the relative impact and extract trends
- Coordinate individual teams to fulfil client requirements and manage deliverables
- Communicate and present complex concepts to business audiences
- Travel to client locations when necessary
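As one small, self-contained illustration of the hypothesis-testing work described above, here is a Welch's two-sample t statistic in pure Python (the function name and sample data are hypothetical, for illustration only):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic: difference in means over the unpooled standard error."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

# Hypothetical conversion metrics for a control and a treated customer segment.
control = [10, 12, 11, 13]
treated = [14, 15, 13, 16]
t = welch_t(treated, control)  # a large |t| suggests a real difference in means
```

In practice the statistic would be compared against a t distribution (e.g., via scipy.stats) to obtain a p-value before drawing any business conclusion.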
Crayon is an equal opportunity employer. Employment is based on a person's merit, qualifications, and professional competence. Crayon does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, marital status, pregnancy, or related conditions.
More about Crayon: https://www.crayondata.com/
More about maya.ai: https://maya.ai/
Preferred Education & Experience:
Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience may substitute for the above if your education is in a different stream.
Well-versed in and 5+ years of hands-on demonstrable experience with:
▪ Data Analysis & Data Modeling
▪ Database Design & Implementation
▪ Database Performance Tuning & Optimization
▪ PL/pgSQL & SQL
5+ years of hands-on development experience in Relational Database (PostgreSQL/SQL Server/Oracle).
5+ years of hands-on development experience in SQL, PL/PgSQL, including stored procedures, functions, triggers, and views.
Hands-on experience with demonstrable working experience in Database Design Principles, SQL Query Optimization Techniques, Index Management, Integrity Checks, Statistics, and Isolation levels
Hands-on experience with demonstrable working experience in Database Read & Write Performance Tuning & Optimization.
Knowledge of and experience working with Domain-Driven Design (DDD) concepts, Object-Oriented Programming (OOP) concepts, cloud architecture concepts, and NoSQL database concepts are added advantages
Knowledge of and working experience in the Oil & Gas, Financial, and Automotive domains is a plus
Hands-on development experience in one or more NoSQL data stores such as Cassandra, HBase, MongoDB, DynamoDB, Elasticsearch, Neo4j, etc. is a plus.
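To make the index-management and query-optimization requirements above concrete, here is a minimal sketch using Python's built-in SQLite module as a stand-in for PostgreSQL (table, column, and index names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(10_000)],
)

query = "SELECT * FROM orders WHERE customer_id = 42"

# Before indexing: the planner must scan the whole table.
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

# After indexing: the planner can seek directly on customer_id.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]
```

In PostgreSQL the equivalent check would be `EXPLAIN ANALYZE`, which reports Seq Scan versus Index Scan plan nodes for the same before/after comparison.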
• Responsible for designing, deploying, and maintaining an analytics environment that processes data at scale
• Contribute to the design, configuration, deployment, and documentation of components that manage data ingestion, real-time streaming, batch processing, data extraction, transformation, enrichment, and loading of data into a variety of cloud data platforms, including AWS and Microsoft Azure
• Identify gaps and improve the existing platform to improve quality, robustness, maintainability, and speed
• Evaluate new and upcoming big data solutions and make recommendations for adoption to extend our platform to meet advanced analytics use cases, such as predictive modeling and recommendation engines
• Data modelling and data warehousing at cloud scale using cloud-native solutions
• Perform development, QA, and DevOps roles as needed to ensure total end-to-end responsibility for solutions
• Experience building, maintaining, and improving Data Models / Processing Pipeline / routing in large scale environments
• Fluency in common query languages, API development, data transformation, and integration of data streams
• Strong experience with large-dataset platforms (e.g. Amazon EMR, Amazon Redshift, AWS Lambda & Fargate, Amazon Athena, Azure SQL Database, Azure Database for PostgreSQL, Azure Cosmos DB, Databricks)
• Fluency in multiple programming languages, such as Python, Shell Scripting, SQL, Java, or similar languages and tools appropriate for large scale data processing.
• Experience with acquiring data from varied sources such as APIs, data queues, flat files, and remote databases
• Understanding of traditional Data Warehouse components (e.g. ETL, Business Intelligence Tools)
• Creativity to go beyond current tools to deliver the best solution to the problem
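The ingestion and transformation responsibilities above can be sketched as a tiny in-memory extract-transform-load pipeline (all names and data are hypothetical; a production version would read from an API or queue and load into a cloud warehouse):

```python
import csv
import io

RAW = "id,amount\n1,10.5\n2,oops\n3,7.0\n"  # stands in for a flat-file source

def extract(text):
    """Extraction: parse the raw flat file into row dicts."""
    return csv.DictReader(io.StringIO(text))

def transform(rows):
    """Transformation/enrichment: cast types and drop malformed records."""
    for row in rows:
        try:
            yield {"id": int(row["id"]), "amount": float(row["amount"])}
        except ValueError:
            continue  # data-quality gate: skip rows that fail type casting

def load(rows):
    """Loading: collect rows; a real pipeline would write to a warehouse."""
    return list(rows)

loaded = load(transform(extract(RAW)))
```

The generator-based transform step processes one record at a time, which is the same streaming shape a real batch or near-real-time pipeline would use at scale.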
Function: Sr. DB Developer
Location: India/Gurgaon/Tamil Nadu
>> THE INDIVIDUAL
- Have a strong background in data platform creation and management.
- Possess in-depth knowledge of Data Management, Data Modelling, Ingestion - Able to develop data models and ingestion frameworks based on client requirements and advise on system optimization.
- Hands-on experience in SQL databases (PostgreSQL) and NoSQL databases (MongoDB)
- Hands-on experience in database performance tuning
- Good to have: knowledge of database setup on cluster nodes
- Should be well versed in data security aspects and data governance frameworks
- Hands-on experience in Spark, Airflow, ELK.
- Good to have: knowledge of a data cleansing tool such as Apache Griffin
- Preferably involved during project implementation, so as to gain background in the business knowledge as well as the technical requirements.
- Strong analytical and problem-solving skills. Exposure to data analytics and knowledge of advanced data analytical tools will be an advantage.
- Strong written and verbal communication skills (presentation skills).
- Certifications in the above technologies are preferred.
- B.Tech/B.E./MCA/M.Tech from a reputed institute.
More than 4 years of experience in data management, data modelling, and ingestion; 8-10 years of total experience
Advanced degree in computer science, math, statistics, or a related discipline (master's degree required)
Extensive data modeling and data architecture skills
Programming experience in Python, R
Background in machine learning frameworks such as TensorFlow or Keras
Knowledge of Hadoop or other distributed computing systems
Experience working in an Agile environment
Advanced math skills (important): linear algebra; differential equations (ODEs and numerical methods); Theory of Statistics 1; Numerical Analysis 1 (numerical linear algebra) and 2 (quadrature); intermediate analysis (point-set topology)
Strong written and verbal communications
Hands-on experience with NLP and NLG
Experience with advanced statistical techniques and concepts (GLM/regression, random forests, boosting, trees, text mining) and their application.
Qentelli is seeking a Solution Architect to untangle and redesign a huge, aging monolithic legacy system. The interesting part is that the new system will be commissioned module by module, with the legacy system phasing out accordingly. So your design will have a cutting-edge future state and a transition state to get there. The implementation today is entirely on the Microsoft tech stack and will continue on the newer Microsoft tech stack. There is also a critical API management component to be introduced into the solution. Performance and scalability will be at the center of your solution architecture. Data modelling is one thing that is of super high importance to know.
You’ll have a distributed team with onshore in the US and offshore in India. As a Solution Architect, you should be able to wear multiple hats: working with the client on solutioning and getting it implemented by engineering and infrastructure teams that are both onshore and offshore. The right candidate will be awesome at fleshing out and documenting every finer detail of the solution, thorough in communicating with their teams, disciplined at getting it implemented, and passionate about client success.
TECHNOLOGIES YOU’LL NEED TO KNOW
Greetings from Qentelli Solutions Private Limited!
We are hiring for PostgreSQL Developer
Experience: 4 to 12 years
Job Location: Hyderabad
- Experience in RDBMS (PostgreSQL preferred), Database Backend development, Data Modelling, Performance Tuning, exposure to NoSQL DB, Kubernetes or Cloud (AWS/Azure/GCS)
Skillset for Developer-II:
- Experience with any Big Data tools (NiFi, Kafka, Spark, Sqoop, Storm, Snowflake), database backend development, Python, NoSQL DB, API exposure, and cloud or Kubernetes exposure
Skillset for API Developer:
- API Development with extensive knowledge on any RDBMS (preferred PostgreSQL), exposure to cloud or Kubernetes
Job Title: Power BI Developer (Onsite)
Location: Park Centra, Sec 30, Gurgaon
CTC: 8 LPA
Time: 1:00 PM - 10:00 PM
Must Have Skills:
- Power BI Desktop Software
- DAX Queries
- Data modeling
- Row-level security
- Data Transformations and filtering
- SSAS and SQL
We are looking for a PBI Analytics Lead responsible for efficient data visualization, DAX queries, and data modeling. The candidate will work on creating complex Power BI reports, including complex M and DAX queries, data modeling, row-level security, visualizations, data transformations, and filtering. They will work closely with the client team to provide solutions and suggestions on Power BI.
Roles and Responsibilities:
- Accurate, intuitive, and aesthetic Visual Display of Quantitative Information: We generate data, information, and insights through our business, product, brand, research, and talent teams. You would assist in transforming this data into visualizations that represent easy-to-consume visual summaries, dashboards and storyboards. Every graph tells a story.
- Understanding Data: You would be performing and documenting data analysis, data validation, and data mapping/design. You would be mining large datasets to determine their characteristics and select appropriate visualizations.
- Project Owner: You would develop, maintain, and manage advanced reporting, analytics, dashboards and other BI solutions, and would be continuously reviewing and improving existing systems and collaborating with teams to integrate new systems. You would also contribute to the overall data analytics strategy by knowledge sharing and mentoring end users.
- Perform ongoing maintenance & production of Management dashboards, data flows, and automated reporting.
- Manage upstream and downstream impact of all changes on automated reporting/dashboards
- Independently apply problem-solving ability to identify meaningful insights to business
- Identify automation opportunities and work with a wide range of stakeholders to implement the same.
- The ability and self-confidence to work independently and increase the scope of the service line
- 3+ years of work experience as an Analytics Lead / Senior Analyst / Sr. PBI Developer.
- Sound understanding and knowledge of PBI Visualization and Data Modeling with DAX queries
- Experience in leading and mentoring a small team.