Pingahla is recruiting Business Intelligence Consultants/Senior Consultants to work on Information Management projects (domestic, onshore, and offshore) as developers and team leads. Candidates are expected to have 3-6 years of experience with Informatica PowerCenter, Talend DI, or Informatica Cloud and must be highly proficient in Business Intelligence in general. The job is based out of our Pune office.
Responsibilities:
- Manage the customer relationship by serving as the single point of contact before, during and after engagements.
- Architect data management solutions.
- Provide technical leadership to other consultants and/or customer/partner resources.
- Design, develop, test and deploy data integration solutions in accordance with the customer's schedule.
- Supervise and mentor all intermediate and junior level team members.
- Provide regular reports to communicate status both internally and externally.
Qualifications:
A typical profile that would suit this position would have the following background:
- A graduate from a reputed engineering college
- Excellent IQ and analytical skills, with the ability to grasp new concepts and learn new technologies quickly
- A willingness to work with a small team in a fast-growing environment.
- A good knowledge of Business Intelligence concepts
Mandatory Requirements:
- Knowledge of Business Intelligence
- Good knowledge of at least one of the following data integration tools: Informatica PowerCenter, Talend DI, Informatica Cloud
- Knowledge of SQL
- Excellent English and communication skills
- Intelligent, quick to learn new technologies
- Track record of accomplishment and effectiveness in handling customers and managing complex data management needs
Azure Data Engineering:
Azure Data Factory (ADF): Proficient level. Candidate should have experience creating and managing data pipelines using Azure Data Factory and integrating it with other Azure services such as Azure Databricks and Azure Data Lake Storage.
Azure Data Lake Storage (ADLS): Proficient level. Candidate should have experience integrating Azure Data Lake Storage Gen2 with other Azure services such as ADF and Azure Databricks, and troubleshooting and resolving issues with Azure Data Lake Storage Gen2.
Azure Databricks (ADB): Proficient level. Candidate should have experience implementing data transformations using Apache Spark and PySpark, and integrating Azure Databricks with other Azure services such as Azure Data Lake Storage Gen2, Azure SQL Database, or a data warehouse. Should have worked with data formats such as Parquet, Avro, JSON, and Delta (see the illustrative PySpark sketch after this list).
Azure Dedicated SQL Pools / Azure SQL DB: Proficient level. Candidate should have worked on at least one of Azure Dedicated SQL Pools or Azure SQL DB, including integrating it with other Azure services such as ADB and ADF.
SQL Knowledge: Proficient level. Candidate should have experience writing complex SQL queries to retrieve and manipulate data, should be able to optimize SQL queries for performance and cost efficiency, and should have experience troubleshooting and resolving issues with databases and SQL queries.
Azure Synapse Analytics: Good to have.
Power BI: Good to have.
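For illustration only (not part of the job requirements), a minimal PySpark sketch of the kind of Databricks transformation described above might look like the following. The storage account, container, paths, and table names are placeholders, and authentication to ADLS Gen2 (service principal or managed identity) is omitted.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks an active SparkSession already exists; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS Gen2 location; storage account and container names are placeholders.
raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/sales/2023/"

# Read raw Parquet files, keep completed orders, and aggregate revenue per day and region.
raw = spark.read.parquet(raw_path)

daily_revenue = (
    raw.filter(F.col("status") == "COMPLETED")
       .groupBy("order_date", "region")
       .agg(F.sum("amount").alias("revenue"))
)

# Persist the result as a Delta table so it can be queried from SQL or Power BI.
daily_revenue.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_revenue")
```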
Title: Data Engineer – Snowflake
Location: Mysore (Hybrid model)
Experience: 2-8 years
Type: Full Time
Walk-in date: 25th Jan 2023 @Mysore
Job Role: We are looking for an experienced Snowflake developer to join our team as a Data Engineer and work as part of a team to help design and develop data-driven solutions that deliver insights to the business. The ideal candidate is a data pipeline builder and data wrangler who enjoys building data-driven systems that power analytical solutions from the ground up. You will be responsible for building and optimizing our data pipelines as well as building automated processes for production jobs, and you will support our software developers, database architects, data analysts, and data scientists on data initiatives.
Key Roles & Responsibilities:
- Use advanced Snowflake, Python, and SQL to extract data from source systems for ingestion into a data pipeline.
- Design, develop and deploy scalable and efficient data pipelines.
- Analyze and assemble large, complex datasets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements. For example: automating manual processes, optimizing data delivery, re-designing data platform infrastructure for greater scalability.
- Build the infrastructure required for optimal extraction, loading, and transformation (ELT) of data from various data sources using AWS and Snowflake, leveraging Python or SQL.
- Monitor cloud-based systems and components for availability, performance, reliability, security and efficiency
- Create and configure appropriate cloud resources to meet the needs of the end users.
- As needed, document topology, processes, and solution architecture.
- Share your passion for staying on top of tech trends, experimenting with and learning new technologies
Qualifications & Experience:
- Bachelor's degree in computer science, computer engineering, or a related field.
- 2-8 years of experience working with Snowflake
- 2+ years of experience with AWS services.
- Candidate should be able to write stored procedures and functions in Snowflake.
- At least 2 years of experience as a Snowflake developer.
- Strong SQL knowledge.
- Data ingestion in Snowflake using Snowflake procedures (a minimal sketch follows the Desired Skills list below).
- ETL experience is a must (any tool).
- Candidate should be familiar with Snowflake architecture.
- Has worked on a migration project.
- Data warehousing concepts (optional).
- Experience with cloud data storage and compute components, including Lambda functions, EC2 instances, and containers.
- Experience with data pipeline and workflow management tools: Airflow, etc.
- Experience cleaning, testing, and evaluating data quality from a wide variety of ingestible data sources
- Experience working with Linux and UNIX environments.
- Experience with profiling data, with and without data definition documentation
- Familiar with Git
- Familiar with issue tracking systems like JIRA (Project Management Tool) or Trello.
- Experience working in an agile environment.
Desired Skills:
- Experience in Snowflake. Must be willing to be Snowflake certified in the first 3 months of employment.
- Experience with a stream-processing system: Snowpipe
- Working knowledge of AWS or Azure
- Experience in migrating from on-prem to cloud systems
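As a rough, hypothetical sketch of the stored-procedure and ingestion skills listed above (the account, credentials, stage, and table names are invented, not taken from the job description), ingestion driven from Python via snowflake-connector-python could look like this:

```python
import snowflake.connector

# Connection parameters are placeholders; in practice they come from a secrets manager.
conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",
    user="ETL_USER",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()

    # Create a simple SQL stored procedure that loads staged Parquet files into a table.
    cur.execute("""
        CREATE OR REPLACE PROCEDURE load_orders()
        RETURNS STRING
        LANGUAGE SQL
        AS
        $$
        BEGIN
            COPY INTO staging.orders
            FROM @orders_stage
            FILE_FORMAT = (TYPE = 'PARQUET')
            MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;
            RETURN 'loaded';
        END;
        $$
    """)

    # Call the procedure as an ETL job step would.
    cur.execute("CALL load_orders()")
    print(cur.fetchone())
finally:
    conn.close()
```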
We are looking for a Machine Learning engineer for one of our premium clients.
Experience: 2-9 years
Location: Gurgaon/Bangalore
Tech Stack:
Python, PySpark, the Python scientific stack; MLflow, Grafana, and Prometheus for machine learning pipeline management and monitoring; SQL, Airflow, Databricks, our own open-source data pipelining framework called Kedro, and Dask/RAPIDS; Django, GraphQL, and ReactJS for horizontal product development; container technologies such as Docker and Kubernetes; CircleCI/Jenkins for CI/CD; cloud solutions such as AWS, GCP, and Azure, as well as Terraform and CloudFormation for deployment.
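To give a flavour of the MLflow part of this stack, here is a minimal, illustrative tracking sketch; the experiment name, parameters, and metric values are made up.

```python
import mlflow

# Hypothetical experiment name; MLflow creates it if it does not exist.
mlflow.set_experiment("demand-forecasting")

with mlflow.start_run():
    # Log hyperparameters and evaluation metrics for this training run.
    mlflow.log_param("model", "gradient_boosting")
    mlflow.log_param("learning_rate", 0.05)
    mlflow.log_metric("rmse", 12.4)
    mlflow.log_metric("mape", 0.08)
```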
ketteQ is a supply chain planning and automation platform. We are looking for an extremely strong and experienced Technical Consultant to help with system design, data engineering, and software configuration and testing during the implementation of supply chain planning solutions. This job comes with a very attractive compensation package and a work-from-home benefit. If you are a high-energy, motivated, self-starting individual, then this could be a fantastic opportunity for you.
The consultant is responsible for the technical design and implementation of supply chain planning solutions.
Responsibilities
- Design and document system architecture
- Design data mappings
- Develop integrations
- Test and validate data
- Develop customizations
- Deploy solution
- Support demo development activities
Requirements
- Minimum 5 years of experience in technical implementation of enterprise software, preferably supply chain planning software
- Proficiency in ANSI SQL/PostgreSQL
- Proficiency in ETL tools such as Pentaho, Talend, Informatica, and MuleSoft
- Experience with web services and REST APIs
- Knowledge of AWS
- Salesforce and Tableau experience a plus
- Excellent analytical skills
- Must possess excellent verbal and written communication skills and be able to communicate effectively with international clients
- Must be a self-starter and highly motivated individual who is looking to make a career in supply chain management
- Quick thinker with proven decision-making and organizational skills
- Must be flexible to work non-standard hours to accommodate globally dispersed teams and clients
Education
- Bachelor's degree in Engineering from a top-ranked university with above-average grades
Wolken Software provides a suite of AI-enabled, SaaS 2.0 cloud-native applications for Customer Service and Enterprise Solutions, namely Wolken Service Desk, Wolken's IT Service Management, and Wolken's HR Case Management. We have replaced incumbents like Salesforce, ServiceNow, Zendesk, etc. at various Fortune 500 and Fortune 1000 companies.
Job Description:
We are looking for a Business Analyst with strong organizational and planning skills and a proven ability to multitask in a fast-paced environment. We are seeking a high level of motivation and a self-starting attitude, and the ability to work with minimal direction and supervision. The ideal candidate will have a metrics/data-driven analytical mindset of constantly measuring success and driving continuous improvement in tools, content, and programs.
Business Analyst Qualifications / Skills:
- Able to exercise independent judgment and take action on it.
- Excellent analytical, and creative problem-solving skills
- Excellent listening, interpersonal, written, and oral communication skills
- Logical and efficient, with keen attention to detail
- Highly self-motivated and directed.
- Ability to effectively prioritize and execute tasks while under pressure.
- Strong customer service orientation
- Experience working in a team-oriented, collaborative environment.
- BE, MBA Preferable.
Responsibilities:
- Data analysis
- Business Insights
- Leadership
- Influence
- Strategy
- Problem structuring
- Project management
4-8 years of overall experience.
- 1-2 years of experience in Azure Data Factory: scheduling jobs in flows and ADF pipelines, performance tuning, error logging, etc.
- 1+ years of experience with Power BI: designing and developing reports, dashboards, metrics, and visualizations in Power BI.
- (Required) Participate in video conferencing calls: daily stand-up meetings and all-day collaboration with team members on cloud migration planning, development, and support.
- Proficiency in relational database concepts and design using star schema, Azure Data Warehouse, and Data Vault.
- Requires 2-3 years of experience with SQL scripting (merge, joins, and stored procedures) and best practices (an illustrative MERGE sketch follows this list).
- Knowledge of deploying and running SSIS packages in Azure.
- Knowledge of Azure Databricks.
- Ability to write and execute complex SQL queries and stored procedures.
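As an illustration of the SQL scripting called out above, a typical MERGE upsert run from Python via pyodbc against an Azure SQL database might look like the sketch below; the connection string, schema, and column names are hypothetical.

```python
import pyodbc

# Connection details are placeholders for an Azure SQL database.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example-server.database.windows.net;"
    "DATABASE=analytics;UID=etl_user;PWD=***"
)

# Upsert staged rows into the target dimension table.
merge_sql = """
MERGE dbo.dim_customer AS target
USING staging.customer AS source
    ON target.customer_id = source.customer_id
WHEN MATCHED THEN
    UPDATE SET target.name = source.name,
               target.city = source.city
WHEN NOT MATCHED BY TARGET THEN
    INSERT (customer_id, name, city)
    VALUES (source.customer_id, source.name, source.city);
"""

cursor = conn.cursor()
cursor.execute(merge_sql)
conn.commit()
conn.close()
```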
What we look for:
We are looking for an associate who will crunch data from various sources and extract the key points from it, help us improve existing pipelines and build new ones as requested, visualize the data where required, and find flaws in our existing algorithms.
Responsibilities:
- Work with multiple stakeholders to gather the requirements of data or analysis and take action on them.
- Write new data pipelines and maintain the existing pipelines.
- Gather data from various DBs and derive the required metrics from them.
Required Skills:
- Experience with Python and libraries like pandas and NumPy (see the illustrative sketch after the Nice to have list).
- Experience in SQL and an understanding of NoSQL DBs.
- Hands-on experience in Data engineering.
- Must have good analytical skills and knowledge of statistics.
- Understanding of Data Science concepts.
- Bachelor's degree in Computer Science or a related field.
- Problem-solving skills and ability to work under pressure.
Nice to have:
- Experience in MongoDB or any NoSQL DB.
- Experience in ElasticSearch.
- Knowledge of Tableau, Power BI or any other visualization tool.
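Purely as an illustration of the pandas/SQL data crunching described above, a small sketch follows; the connection string, query, and column names are invented for the example.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection string; in practice it would point at one of our databases.
engine = create_engine("postgresql+psycopg2://analyst:***@db.example.internal/orders")

# Pull raw order events and compute simple daily metrics.
orders = pd.read_sql("SELECT order_date, status, amount FROM orders", engine)

daily = (
    orders.assign(completed=orders["status"].eq("COMPLETED"))
          .groupby("order_date")
          .agg(order_count=("status", "size"),
               revenue=("amount", "sum"),
               completion_rate=("completed", "mean"))
          .reset_index()
)

print(daily.head())
```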
Job Description
The role requires experience with AWS along with programming experience in Python and Spark.
Roles & Responsibilities
You Will:
- Translate functional requirements into technical design
- Interact with clients and internal stakeholders to understand the data and platform requirements in detail and determine core cloud services needed to fulfil the technical design
- Design, Develop and Deliver data integration interfaces in AWS
- Design, Develop and Deliver data provisioning interfaces to fulfil consumption needs
- Deliver data models on the cloud platform; these could be on AWS Redshift or SQL.
- Design, Develop and Deliver data integration interfaces at scale using Python / Spark
- Automate core activities to minimize the delivery lead times and improve the overall quality
- Optimize platform cost by selecting right platform services and architecting the solution in a cost-effective manner
- Manage code and deployments using DevOps and CI/CD processes
- Deploy logging and monitoring across the different integration points for critical alerts
You Have:
- Minimum 5 years of software development experience
- Bachelor's and/or Master’s degree in computer science
- Strong Consulting skills in data management including data governance, data quality, security, data integration, processing and provisioning
- Delivered data management projects on AWS
- Translated complex analytical requirements into technical design including data models, ETLs and Dashboards / Reports
- Experience deploying dashboards and self-service analytics solutions on both relational and non-relational databases
- Experience with different computing paradigms in databases such as In-Memory, Distributed, Massively Parallel Processing
- Successfully delivered large scale data management initiatives covering Plan, Design, Build and Deploy phases leveraging different delivery methodologies including Agile
- Strong knowledge of continuous integration, static code analysis and test-driven development
- Experience in delivering projects in a highly collaborative delivery model with teams at onsite and offshore
- Must have excellent analytical and problem-solving skills
- Delivered change management initiatives focused on driving data platforms adoption across the enterprise
- Strong verbal and written communications skills are a must, as well as the ability to work effectively across internal and external organizations
JD:
Required Skills:
- Intermediate to expert level hands-on programming in one of the following languages: Java, Python, PySpark, or Scala.
- Strong practical knowledge of SQL.
- Hands-on experience with Spark/SparkSQL (an illustrative sketch follows this list)
- Data Structures and Algorithms
- Hands-on experience as an individual contributor in the Design, Development, Testing and Deployment of applications based on Big Data technologies
- Experience with Big Data application tools such as Hadoop, MapReduce, Spark, etc.
- Experience with NoSQL databases like HBase, etc.
- Experience with Linux OS environment (Shell script, AWK, SED)
- Intermediate RDBMS skills; able to write SQL queries with complex relations on top of a large RDBMS (100+ tables)
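As a purely illustrative sketch of the Spark/SparkSQL skills listed above (the paths, schemas, and column names are invented), a SparkSQL join and aggregation over two registered views might look like this:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-example").getOrCreate()

# Hypothetical inputs on HDFS or object storage; paths and schemas are invented.
orders = spark.read.parquet("/data/raw/orders/")
customers = spark.read.parquet("/data/raw/customers/")

orders.createOrReplaceTempView("orders")
customers.createOrReplaceTempView("customers")

# SparkSQL join and aggregation across the two registered views.
summary = spark.sql("""
    SELECT c.segment,
           COUNT(DISTINCT o.order_id) AS order_count,
           SUM(o.amount)              AS total_amount
    FROM orders o
    JOIN customers c
      ON o.customer_id = c.customer_id
    GROUP BY c.segment
    ORDER BY total_amount DESC
""")

summary.show()
```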