Must Have Skills:
- Good experience in PySpark, including DataFrame core functions and Spark SQL (see the sketch after this list)
- Good experience in SQL databases; able to write queries of fair complexity
- Excellent experience in Big Data programming for data transformation and aggregation
- Good grasp of ELT architecture: business-rules processing and data extraction from the Data Lake into data streams for business consumption
- Good customer communication skills
- Good analytical skills
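As a concrete illustration of the DataFrame and Spark SQL skills above, here is a minimal, self-contained PySpark sketch; the session settings and the toy sales data are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("skills_demo").getOrCreate()

    # Hypothetical toy data standing in for a real sales table
    df = spark.createDataFrame(
        [("east", "widget", 10.0), ("west", "widget", 12.5), ("east", "gadget", 7.0)],
        ["region", "product", "amount"],
    )

    # DataFrame core functions: filter, group, aggregate
    df.filter(F.col("amount") > 5).groupBy("region") \
      .agg(F.sum("amount").alias("revenue")).show()

    # Spark SQL: the same aggregation through a temp view
    df.createOrReplaceTempView("sales")
    spark.sql("SELECT region, SUM(amount) AS revenue FROM sales GROUP BY region").show()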
Technology Skills (Good to Have):
- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL Data Warehouse, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
- Experience migrating on-premises data warehouses to data platforms on the Azure cloud.
- Designing and implementing data engineering, ingestion, and transformation functions
- Azure Synapse or Azure SQL Data Warehouse
- Spark on Azure, available through HDInsight and Databricks (a read/write sketch follows this list)
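For the Azure items above, a minimal PySpark sketch of reading from Azure Data Lake Storage Gen2 and writing a curated table; the storage account, container, path, and table names are hypothetical, and the cluster is assumed to already have access to the storage account:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("adls_demo").getOrCreate()

    # Hypothetical ADLS Gen2 location (abfss://<container>@<account>.dfs.core.windows.net/<path>)
    raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/events/")

    # Light transformation, then persist as a managed table for downstream consumers
    raw.dropDuplicates(["event_id"]) \
       .write.mode("overwrite").saveAsTable("curated.events")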
Job Description
Responsibilities:
- Collaborate with stakeholders to understand business objectives and requirements for AI/ML projects.
- Conduct research and stay up to date with the latest AI/ML algorithms, techniques, and frameworks.
- Design and develop machine learning models, algorithms, and data pipelines.
- Collect, preprocess, and clean large datasets to ensure data quality and reliability.
- Train, evaluate, and optimize machine learning models using appropriate evaluation metrics.
- Implement and deploy AI/ML models into production environments.
- Monitor model performance and propose enhancements or updates as needed.
- Collaborate with software engineers to integrate AI/ML capabilities into existing software systems.
- Perform data analysis and visualization to derive actionable insights.
- Stay informed about emerging trends and advancements in the field of AI/ML and apply them to improve existing solutions.
Strong experience in Apache Spark (PySpark) is a must.
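To ground the model-development responsibilities above, a minimal scikit-learn sketch of the train/evaluate loop; the dataset and model choice are illustrative only:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Illustrative dataset; a real project would start from curated business data
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    # Evaluate with a metric appropriate to the problem (accuracy here, for brevity)
    print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")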
Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience of 3-5 years as an AI/ML Engineer or a similar role.
- Strong knowledge of machine learning algorithms, deep learning frameworks, and data science concepts.
- Proficiency in programming languages such as Python, Java, or C++.
- Experience with popular AI/ML libraries and frameworks, such as TensorFlow, Keras, PyTorch, or scikit-learn.
- Familiarity with cloud platforms, such as AWS, Azure, or GCP, and their AI/ML services.
- Solid understanding of data preprocessing, feature engineering, and model evaluation techniques.
- Experience in deploying and scaling machine learning models in production environments.
- Strong problem-solving skills and ability to work on multiple projects simultaneously.
- Excellent communication and teamwork skills.
Preferred Skills:
- Experience with natural language processing (NLP) techniques and tools.
- Familiarity with big data technologies, such as Hadoop, Spark, or Hive.
- Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes.
- Understanding of DevOps practices for AI/ML model deployment.
- Apache Spark (PySpark).
ketteQ is a supply chain planning and automation platform. We are looking for a strong, experienced Technical Consultant to help with system design, data engineering, and software configuration and testing during the implementation of supply chain planning solutions. This job comes with a very attractive compensation package and a work-from-home benefit. If you are a high-energy, motivated, self-starting individual, this could be a fantastic opportunity for you.
You will be responsible for the technical design and implementation of supply chain planning solutions.
Responsibilities
- Design and document system architecture
- Design data mappings
- Develop integrations
- Test and validate data
- Develop customizations
- Deploy solution
- Support demo development activities
Requirements
- Minimum 5 years' experience in technical implementation of enterprise software, preferably Supply Chain Planning software
- Proficiency in ANSI SQL/PostgreSQL
- Proficiency in ETL tools such as Pentaho, Talend, Informatica, and MuleSoft
- Experience with web services and REST APIs (see the sketch after this list)
- Knowledge of AWS
- Salesforce and Tableau experience a plus
- Excellent analytical skills
- Must possess excellent verbal and written communication skills and be able to communicate effectively with international clients
- Must be a self-starter and highly motivated individual who is looking to make a career in supply chain management
- Quick thinker with proven decision-making and organizational skills
- Must be flexible to work non-standard hours to accommodate globally dispersed teams and clients
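As a sketch of the web-services requirement above, a minimal Python call against a hypothetical planning-system REST endpoint; the URL, token, and payload shape are assumptions, not a real API:

    import requests

    # Hypothetical endpoint and bearer token; substitute the real system's API
    BASE_URL = "https://planning.example.com/api/v1"
    headers = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

    # Push a batch of demand records and fail loudly on HTTP errors
    payload = {"records": [{"sku": "A-100", "site": "DC1", "qty": 250}]}
    resp = requests.post(f"{BASE_URL}/demand", json=payload, headers=headers, timeout=30)
    resp.raise_for_status()
    print(resp.json())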
Education
- Bachelor's in Engineering from a top-ranked university with above-average grades
Designation: Senior - DBA
Experience: 6-9 years
CTC: INR 17-20 LPA
Night Allowance: INR 800/Night
Location: Hyderabad, Hybrid
Notice Period: NA
Shift Timing: 6:30 pm to 3:30 am
Openings: 3
Roles and Responsibilities:
As a Senior Database Administrator, you will be responsible for the physical design, development, administration, and optimization of properly engineered database systems to meet agreed business and technical requirements. The candidate will work as part of (but not limited to) the Onsite/Offsite DBA group.
- Administration and management of databases in Dev, Stage, and Production environments
- Performance tuning of database schemas, stored procedures, etc.
- Providing technical input on the setup and configuration of database servers and the SAN disk subsystem on all database servers
- Troubleshooting and handling all database-related issues and tracking them through to resolution
- Proactive monitoring of databases from both a performance and a capacity-management perspective
- Performing database maintenance activities such as backup/recovery and rebuilding and reorganizing indexes (see the sketch after this list)
- Ensuring that all database releases are properly assessed and measured from a functionality and performance perspective
- Ensuring that all databases are up to date with the latest service packs, patches, and security fixes
- Taking ownership and ensuring high-quality, timely delivery of projects on hand
- Collaborating with application/database developers, quality assurance, and operations/support staff
- Helping manage large, high-transaction-rate SQL Server production databases
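As an illustration of the index-maintenance duty above, a minimal Python/pyodbc sketch that rebuilds one index; the connection string, table, and index names are hypothetical:

    import pyodbc  # assumes the Microsoft ODBC Driver for SQL Server is installed

    # Hypothetical connection details
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};SERVER=dbserver01;DATABASE=sales;"
        "UID=dba_user;PWD=<password>;Encrypt=yes;TrustServerCertificate=yes",
        autocommit=True,  # DDL such as ALTER INDEX is simplest outside a transaction
    )
    cursor = conn.cursor()

    # Rebuild a fragmented index (hypothetical table/index names)
    cursor.execute("ALTER INDEX ix_orders_customer ON dbo.orders REBUILD")
    conn.close()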
Eligibility:
- Bachelor's/Master's degree (BE/BTech/MCA/MTech/MS)
- 6-8 years of solid experience in SQL Server 2016/2019 database administration and maintenance on the Azure and AWS clouds
- Experience handling and managing large SQL Server databases (greater than 200 GB) in a real-time production environment
- Experience troubleshooting and resolving database integrity issues, performance issues, blocking/deadlocking issues, connectivity issues, data replication issues, etc.
- Experience configuring and troubleshooting SQL Server HA
- Ability to detect and troubleshoot CPU, memory, I/O, disk space, and other resource contention issues
- Experience with database maintenance activities such as backup/recovery, capacity monitoring/management, and Azure Backup Services
- Experience with HA/failover technologies such as clustering, SAN replication, log shipping, and mirroring
- Experience collaborating with development teams on physical database design activities and performance tuning
- Experience managing and making software deployments/changes in real-time production environments
- Ability to work on multiple projects at one time with minimal supervision and ensure high-quality, timely delivery
- Knowledge of tools like SQL LiteSpeed, SQL Diagnostic Manager, and AppDynamics
- Strong understanding of Data Warehousing concepts and SQL Server architecture
- Certified DBA; proficient in T-SQL and in storage technologies such as ASM, SAN, NAS, RAID, and multipathing
- Strong analytical and problem-solving skills; proactive, independent, with a proven ability to work under tight targets and pressure
- Experience working in a highly regulated environment such as a financial services institution
- Expertise in SSIS and SSRS
Skills:
SSIS
SSRS
The client helps companies uncover the 3% of active buyers in their target market. It evaluates over 100 billion data points and analyzes factors such as buyer journeys, technology adoption patterns, and other digital footprints to deliver market and sales intelligence. Its customers have access to the buying patterns and contact information of more than 17 million companies and 70 million decision makers across the world.
Role – Data Engineer
Responsibilities
- Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for the Data Lake/Data Warehouse.
- Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs.
- Assemble large, complex data sets from third-party vendors to meet business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology.
- Streamline existing reporting and analysis solutions and introduce enhanced ones that leverage complex data sources derived from multiple internal systems.
Requirements
- 5+ years of experience in a Data Engineer role.
- Proficiency in Linux.
- Must have SQL knowledge and experience working with relational databases and query authoring (SQL), as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
- Must have experience with Python/Scala.
- Must have experience with Big Data technologies like Apache Spark.
- Must have experience with Apache Airflow (a minimal DAG sketch follows this list).
- Experience with data pipeline and ETL tools like AWS Glue.
- Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
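For the Airflow requirement, a minimal DAG sketch; it assumes Airflow 2.4+, and the task bodies are placeholders:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull data from source systems")  # placeholder

    def load():
        print("load data into the warehouse")  # placeholder

    with DAG(
        dag_id="daily_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # the `schedule` argument replaced `schedule_interval` in Airflow 2.4
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task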
Data Engineering: Senior Engineer/Manager
As a Senior Engineer/Manager in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.
Must Have Skills:
1. GCP
2. Spark Streaming: live data streaming experience is desired (a Structured Streaming sketch follows this list).
3. Any one coding language: Java/Python/Scala
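A minimal Structured Streaming sketch for the live-streaming point above; a socket source is used for brevity, where a production pipeline would typically read from Kafka or Pub/Sub:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("stream_demo").getOrCreate()

    # Read a live text stream; swap format("socket") for format("kafka") in real pipelines
    lines = spark.readStream.format("socket") \
        .option("host", "localhost").option("port", 9999).load()

    # Running count per distinct line, updated as data arrives
    counts = lines.groupBy("value").count()

    query = counts.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination()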
Skills & Experience:
- Minimum 5+ years of overall experience, with at least 4 years of relevant experience in Big Data technologies
- Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and the other components required to build end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.
- Strong experience in at least one of the programming languages Java, Scala, and Python; Java preferable
- Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
- Well-versed, with working knowledge of data platform services on GCP
- Bachelor's degree and 6 to 12 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position
Your Impact :
- Data Ingestion, Integration and Transformation
- Data Storage and Computation Frameworks, Performance Optimizations
- Analytics & Visualizations
- Infrastructure & Cloud Computing
- Data Management Platforms
- Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
- Build functionality for data analytics, search and aggregation
Job Title: Senior Data Engineer
Experience: 8 to 11 years
Location: Remote
Notice: Immediate or max 1 month
Role: Permanent
Skill set: Google Cloud Platform, BigQuery, Java, Python, Airflow, Dataflow, Apache Beam.
Experience required:
5 years of experience in software design and development, with 4 years of experience in the data engineering field, is preferred.
2 years of hands-on experience with GCP cloud data implementation suites such as BigQuery, Pub/Sub, Dataflow/Apache Beam, Airflow/Composer, Cloud Storage, etc.
Strong experience with and understanding of very large-scale data architecture, solutions, and operationalization of data warehouses, data lakes, and analytics platforms.
Mandatory 1 year of software development experience using Java or Python.
Extensive hands-on experience working with data using SQL and Python.
Must have: GCP, BigQuery, Airflow, Dataflow, Python, Java (a BigQuery client sketch follows this list).
GCP knowledge is a must.
Java as the programming language (preferred).
BigQuery, Pub/Sub, Dataflow/Apache Beam, Airflow/Composer, Cloud Storage, Python.
Good communication skills.
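A minimal BigQuery sketch for the must-have list above; it assumes GCP credentials are already configured in the environment, and the query runs against a public dataset:

    from google.cloud import bigquery  # pip install google-cloud-bigquery

    client = bigquery.Client()  # picks up project/credentials from the environment

    query = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        GROUP BY name
        ORDER BY total DESC
        LIMIT 5
    """
    for row in client.query(query).result():
        print(row.name, row.total)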
Big Data with cloud:
Experience: 5-10 years
Location: Hyderabad/Chennai
Notice period: 15-20 days max
1. Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight
2. Experience in developing Lambda functions with AWS Lambda (a minimal handler sketch follows this list)
3. Expertise with Spark/PySpark – candidates should be hands-on with PySpark code and able to do transformations with Spark
4. Should be able to code in Python and Scala.
5. Snowflake experience will be a plus
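A minimal AWS Lambda handler sketch for point 2 above; the event shape assumes an S3-style trigger, so adjust for the real event source:

    import json

    def lambda_handler(event, context):
        # Count incoming records (S3/Kinesis-style events carry a "Records" list)
        records = event.get("Records", [])
        for record in records:
            print(json.dumps(record))  # placeholder for real transformation logic
        return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}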
- Excellent working knowledge of Data Warehousing/Data Migration activities using an ETL tool.
- Strong data integration, PostgreSQL/Oracle database skills, shell scripting, Python programming, and development know-how.
- Hands-on experience working with and generating XML documents (a generate-and-parse sketch follows this list).
- Good analytical and business-process understanding capability.
- Familiar with Data Models, Source-Target Data Mapping, and Transactional and Master Data concepts.
- Well-experienced in high-level/detailed design and performance tuning of ETL jobs.
- Very good communication skills, interpersonal skills, and stakeholder management skills; self-motivated, a quick learner, and a team player.
- Exposure to the After Sales business domain is highly preferred.
- Experience using HP ALM and Jira for ticketing.
- Experience in release management.
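For the XML point above, a minimal generate-and-parse sketch using Python's standard library; the element names are illustrative:

    import xml.etree.ElementTree as ET

    # Generate a small document (hypothetical order structure)
    root = ET.Element("orders")
    order = ET.SubElement(root, "order", id="1001")
    ET.SubElement(order, "amount").text = "250.00"
    xml_bytes = ET.tostring(root, encoding="utf-8")

    # Parse it back and read a value
    parsed = ET.fromstring(xml_bytes)
    print(parsed.find("order/amount").text)  # -> 250.00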
JD:
Required Skills:
- Intermediate- to expert-level hands-on programming in one programming language: Java, Python, PySpark, or Scala.
- Strong practical knowledge of SQL.
- Hands-on experience with Spark/Spark SQL.
- Data Structures and Algorithms.
- Hands-on experience as an individual contributor in the design, development, testing, and deployment of applications based on Big Data technologies.
- Experience with Big Data application tools such as Hadoop, MapReduce, Spark, etc.
- Experience with NoSQL databases like HBase, etc.
- Experience with the Linux OS environment (shell scripting, AWK, SED).
- Intermediate RDBMS skills; able to write SQL queries with complex relations on top of a big RDBMS (100+ tables).