50+ PySpark Jobs in India
Apply to 50+ PySpark Jobs on CutShort.io. Find your next job, effortlessly. Browse PySpark Jobs and apply today!
About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Work you’ll do
As a Junior ML/AI Engineer, you will help design and develop intelligent software to solve business problems. You will collaborate with senior ML engineers, data scientists, and domain experts to incorporate ML and AI technologies into existing or new workflows. You’ll analyze new opportunities and ideas. You’ll train and evaluate ML models, conduct experiments, and help develop PoCs and prototypes.
Responsibilities
- Design, train, improve, and launch machine learning models using tools such as XGBoost, TensorFlow, and PyTorch.
- Contribute directly to improving the way we evaluate and monitor model and system performance.
- Propose and implement ideas that directly impact our operational and strategic metrics.
Who you are
You are an engineer who is passionate about using AI/ML to improve processes, products and delight customers. You have experience working with less-than-clean data, developing and tweaking ML models, and are deeply interested in getting these models into production as cost-effectively as possible. You thrive on taking initiative, are very comfortable with ambiguity and can passionately defend your decisions.
Requirements and skills
- 3+ years of experience in programming languages such as Python, PySpark, or Scala.
- Proficient knowledge of cloud platforms (e.g., AWS, Azure, GCP), containerization, and DevOps tooling (Docker, Kubernetes).
- Beginner-level knowledge of MLOps practices and platforms such as MLflow (a minimal experiment-tracking sketch follows this list).
- Strong understanding of ML algorithms and frameworks (e.g., TensorFlow, PyTorch).
- Broad understanding of data structures, data engineering, statistical methodologies and machine learning models.
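To make the MLflow item above concrete, here is a minimal, hedged experiment-tracking sketch. The model choice, metric, and experiment name are illustrative placeholders, not details from this posting.

```python
# Minimal MLflow tracking sketch (hypothetical example; names are placeholders).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-experiment")          # placeholder experiment name
with mlflow.start_run():
    params = {"n_estimators": 200, "learning_rate": 0.05}
    model = GradientBoostingClassifier(**params).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    mlflow.log_params(params)                     # record hyperparameters
    mlflow.log_metric("test_auc", auc)            # record an evaluation metric
    mlflow.sklearn.log_model(model, "model")      # store the fitted model artifact
```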
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creep sets in unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say it out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes rather than hiring someone who has less to do. We don’t like the squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to be present in the city. We intend to move to a hybrid model in a few months’ time.
Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable model tracking, model experimentation, and model automation
- Develop scalable ML pipelines
- Develop MLOps components in the machine learning development life cycle using a Model Repository (either of): MLflow, Kubeflow Model Registry (a minimal registry sketch follows this list)
- Develop MLOps components in the machine learning development life cycle using Machine Learning Services (either of): Kubeflow, DataRobot, HopsWorks, Dataiku, or any relevant ML E2E PaaS/SaaS
- Work across all phases of the model development life cycle to build MLOps components
- Build the knowledge base required to deliver increasingly complex MLOps projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
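As a rough illustration of the Model Repository item above, here is a minimal, hedged sketch of registering a trained model in the MLflow Model Registry. The run ID, model name, and stage are illustrative placeholders, not details from this posting.

```python
# Hypothetical MLflow Model Registry sketch; identifiers below are placeholders.
import mlflow
from mlflow.tracking import MlflowClient

run_id = "abc123"                                  # placeholder: an existing run ID
model_uri = f"runs:/{run_id}/model"                # artifact path logged during training

# Register the model version under a named entry in the registry.
result = mlflow.register_model(model_uri, "demand-forecaster")

# Promote the new version to the Staging stage for validation.
client = MlflowClient()
client.transition_model_version_stage(
    name="demand-forecaster",
    version=result.version,
    stage="Staging",
)
```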
Required Qualifications
- 3-5 years’ experience building production-quality software.
- B.E/B.Tech/M.Tech in Computer Science or related technical degree, or equivalent
- Strong experience in System Integration, Application Development or Data Warehouse projects across technologies used in the enterprise space
- Knowledge of MLOps, machine learning and Docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- CI/CD experience (e.g., Jenkins, GitHub Actions)
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable model tracking, model experimentation, and model automation
- Develop scalable ML pipelines
- Develop MLOps components in the machine learning development life cycle using a Model Repository (either of): MLflow, Kubeflow Model Registry
- Develop MLOps components in the machine learning development life cycle using Machine Learning Services (either of): Kubeflow, DataRobot, HopsWorks, Dataiku, or any relevant ML E2E PaaS/SaaS
- Work across all phases of the model development life cycle to build MLOps components
- Build the knowledge base required to deliver increasingly complex MLOps projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 5.5-9 years’ experience building production-quality software
- B.E/B.Tech/M.Tech in Computer Science or related technical degree, or equivalent
- Strong experience in System Integration, Application Development or Data Warehouse projects across technologies used in the enterprise space
- Expertise in MLOps, machine learning and Docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- Experience developing CI/CD components for production-ready ML pipelines.
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Team handling, problem-solving, project management and communication skills, and creative thinking
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
About Tazapay
Tazapay is a cross-border payment service provider, offering local collections via local payment methods, virtual accounts, and cards in over 70 markets. Merchants can conduct global transactions without creating local entities, while Tazapay ensures compliance with local regulations, resulting in reduced transaction costs, FX transparency, and higher authorization rates. Licensed and backed by top investors, Tazapay provides secure and efficient solutions.
What’s Exciting Waiting for You?
This is an incredible opportunity to join our team before we scale up. You'll be part of a growth story that includes roles across Sales, Software Development, Marketing, HR, and Accounting. Enjoy a unique experience building something from the ground up and gain the satisfaction of seeing your work impact thousands of customers. We foster a culture of openness, innovation, and creating great memories together. Ready for the ride?
About the Role
As a Data Engineer on our team, you’ll design, develop, and maintain ETL processes using AWS Glue, PySpark, Athena, QuickSight, and other data engineering tools. Responsibilities include transforming and loading data into our data lake, optimizing data models for efficient querying, implementing data governance policies, and ensuring data security, privacy, and compliance.
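For orientation only, a minimal AWS Glue PySpark job of the kind this role describes might look like the sketch below. The catalog database, table, and S3 paths are hypothetical placeholders, not Tazapay resources.

```python
# Minimal AWS Glue PySpark ETL sketch (placeholder catalog/table/bucket names).
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw payment events from the Glue Data Catalog.
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="payment_events"
)

# Basic transformation: keep settled transactions only.
settled = raw.toDF().filter("status = 'SETTLED'")

# Write curated data back to the data lake as partitioned Parquet.
(settled.write.mode("overwrite")
        .partitionBy("settlement_date")
        .parquet("s3://example-data-lake/curated/payment_events/"))

job.commit()
```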
Requirements
Must-Have
- BE / B.Tech / M.Sc / MCA / ME / M.Tech
- 5+ years of expertise in data engineering
- Strong experience in AWS Glue PySpark for ETL processes
- Proficient with AWS Athena & SQL for data querying
- Expertise in AWS QuickSight for data visualization and dashboarding
- Solid understanding of data lake concepts, data modeling, and best practices
- Strong experience with AWS cloud services, including S3, Lambda, Redshift, Glue, and IAM
- Excellent analytical, problem-solving, written, and verbal communication skills
Nice-to-Have
- Fintech domain knowledge
- AWS certifications
- Experience with large data lake implementations
A leading data & analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage.
What are we looking for?
- Bachelor’s degree in analytics related area (Data Science, Computer Engineering, Computer Science, Information Systems, Engineering, or a related discipline)
- 7+ years of work experience in data science, analytics, engineering, or product management for a diverse range of projects
- Hands on experience with python, deploying ML models
- Hands-on experience with time-wise tracking of model performance, and diagnosis of data/model drift (a minimal drift-check sketch follows this list)
- Familiarity with Dataiku or other data-science-enabling tools (SageMaker, etc.)
- Demonstrated familiarity with distributed computing frameworks (Snowpark, PySpark)
- Experience working with various types of data (structured/unstructured)
- Deep understanding of all data science phases (e.g., data engineering, EDA, machine learning, MLOps, serving)
- Highly self-motivated to deliver both independently and with strong team collaboration
- Ability to creatively take on new challenges and work outside comfort zone
- Strong English communication skills (written & verbal)
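One simple, common way to diagnose the data drift mentioned in this list is a two-sample Kolmogorov-Smirnov test comparing a feature's training-time distribution with its recent serving distribution. The sketch below uses synthetic arrays as stand-ins; the threshold and feature are illustrative assumptions.

```python
# Simple data-drift check using a two-sample KS test (synthetic data for illustration).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)    # feature at training time
recent_feature = rng.normal(loc=0.3, scale=1.1, size=1000)   # same feature in production

stat, p_value = ks_2samp(train_feature, recent_feature)
if p_value < 0.01:
    print(f"Possible drift detected (KS statistic={stat:.3f}, p={p_value:.4f})")
else:
    print("No significant drift detected")
```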
Roles & Responsibilities:
- Clearly articulates expectations, capabilities, and action plans; actively listens with others’ frame of reference in mind; appropriately shares information with team; favorably influences people without direct authority
- Clearly articulates scope and deliverables of projects; breaks complex initiatives into detailed component parts and sequences actions appropriately; develops action plans and monitors progress independently; designs success criteria and uses them to track outcomes; drives implementation of recommendations when appropriate, engages with stakeholders throughout to ensure buy-in
- Manages projects with and through others; shares responsibility and credit; develops self and others through teamwork; comfortable providing guidance and sharing expertise with others to help them develop their skills and perform at their best; helps others take appropriate risks; communicates frequently with team members earning respect and trust of the team
- Experience in translating business priorities and vision into product/platform thinking, set clear directives to a group of team members with diverse skillsets, while providing functional & technical guidance and SME support
- Demonstrated experience interfacing with internal and external teams to develop innovative data science solutions
- Strong business analysis, product design, and product management skills
- Ability to work in a collaborative environment - reviewing peers' code, contributing to problem-solving sessions, and communicating technical knowledge to a variety of audiences (such as management, brand teams, data engineering teams, etc.)
- Ability to articulate model performance to a non-technical crowd, and ability to select appropriate evaluation criteria to evaluate hidden confounders and biases within a model
- MLOps frameworks and their use in model tracking and deployment, and automating the model serving pipeline
- Work with ML models of all sizes, from linear/logistic regression and other sklearn-style models to deep learning
- Formulate training schemes for unbiased model training (e.g., k-fold cross-validation, leave-one-out cross-validation) for parameter searching and model tuning (see the sketch after this list)
- Ability to work on machine learning problems such as recommender systems and the end-to-end ML lifecycle
- Ability to manage ML on largely imbalanced training sets (<5% positive rate)
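Tying together the cross-validation and class-imbalance items above, here is a minimal, hedged scikit-learn sketch. The data is synthetic with roughly a 5% positive rate, and the model and metric are illustrative choices, not requirements from this posting.

```python
# Stratified k-fold cross-validation on an imbalanced dataset (illustrative sketch).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic data with ~5% positives to mimic a heavily imbalanced problem.
X, y = make_classification(
    n_samples=10_000, n_features=20, weights=[0.95, 0.05], random_state=42
)

# class_weight="balanced" re-weights the minority class during training.
model = LogisticRegression(max_iter=1000, class_weight="balanced")

# Stratified folds preserve the positive rate in every split.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print("Per-fold ROC AUC:", scores.round(3), "mean:", scores.mean().round(3))
```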
Skills / Tools
- SQL & Python / PySpark
- AWS Services: Glue, Appflow, Redshift - Mandatory
- Data warehousing basics
- Data modelling basics
Job Description
- Experience implementing and delivering data solutions and pipelines on the AWS Cloud Platform. Design, implement, and maintain the data architecture for all AWS data services
- A strong understanding of data modelling, data structures, databases (Redshift), and ETL processes
- Work with stakeholders to identify business needs and requirements for data-related projects
- Strong SQL and/or Python or PySpark knowledge
- Creating data models that can be used to extract information from various sources & store it in a usable format
- Optimize data models for performance and efficiency
- Write SQL queries to support data analysis and reporting
- Monitor and troubleshoot data pipelines
- Collaborate with software engineers to design and implement data-driven features
- Perform root cause analysis on data issues
- Maintain documentation of the data architecture and ETL processes
- Identifying opportunities to improve performance by improving database structure or indexing methods
- Maintaining existing applications by updating existing code or adding new features to meet new requirements
- Designing and implementing security measures to protect data from unauthorized access or misuse
- Recommending infrastructure changes to improve capacity or performance
- Experience in the process industry
Job Description:
We are seeking an experienced Azure Data Engineer with expertise in Azure Data Factory, Azure Databricks, and Azure Data Fabric to lead the migration of our existing data pipeline and processing infrastructure. The ideal candidate will have a strong background in Azure cloud data services, big data analytics, and data engineering, with specific experience in Azure Data Fabric. We are looking for someone who has at least 6 months of hands-on experience with Azure Data Fabric or has successfully completed at least one migration to Azure Data Fabric.
Key Responsibilities:
- Assess the current data architecture using Azure Data Factory and Databricks and develop a detailed migration plan to Azure Data Fabric.
- Design and implement end-to-end data pipelines within Azure Data Fabric, including data ingestion, transformation, storage, and analytics (see the sketch after this list).
- Optimize data workflows to leverage Azure Data Fabric's unified platform for data integration, big data processing, and real-time analytics.
- Ensure seamless integration of data from SharePoint and other sources into Azure Data Fabric, maintaining data quality and integrity.
- Collaborate with business analysts and business stakeholders to align data strategies and optimize the data environment for machine learning and AI workloads.
- Implement security best practices, including data governance, access control, and monitoring within Azure Data Fabric.
- Conduct performance tuning and optimization for data storage and processing within Azure Data Fabric to ensure high availability and cost efficiency.
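To illustrate the kind of pipeline step involved, here is a generic PySpark-to-Delta sketch rather than a description of any existing infrastructure. The storage path, column names, and table name are placeholders, and it assumes an environment such as a Databricks cluster or a Fabric notebook where Delta Lake is already available.

```python
# Generic ingestion-to-Delta sketch for a Databricks / Fabric PySpark notebook.
# Paths and table names are placeholders; Delta support is assumed to be preconfigured.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingestion").getOrCreate()

# Ingest raw CSV drops from a landing zone.
raw = (spark.read.option("header", "true")
             .csv("abfss://landing@examplestorage.dfs.core.windows.net/orders/"))

# Light transformation: type casting and a load timestamp for lineage.
clean = (raw.withColumn("amount", F.col("amount").cast("double"))
            .withColumn("ingested_at", F.current_timestamp()))

# Store as a Delta table for downstream analytics.
clean.write.format("delta").mode("append").saveAsTable("curated.orders")
```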
Key Requirements:
- Proven experience (5+ years) in Azure data engineering with a strong focus on Azure Data Factory and Azure Databricks.
- At least 6 months of hands-on experience with Azure Data Fabric or completion of one migration to Azure Data Fabric.
- Hands-on experience in designing, building, and managing data pipelines, data lakes, and data warehouses on Azure.
- Expertise in Spark, SQL, and data transformation techniques within Azure environments.
- Strong understanding of data governance, security, and compliance in cloud environments.
- Experience with migrating data architectures and optimizing workflows on cloud platforms.
- Ability to work collaboratively with cross-functional teams and communicate technical concepts effectively to non-technical stakeholders.
- Azure certifications (e.g., Azure Data Engineer Associate, Azure Solutions Architect Expert) are a plus.
Key requirements:
- The person should have at least 6 months of work experience in Data Fabric; this is a strict minimum.
- Solid technical skills: Databricks, Data Fabric, and Data Factory
- Polished, good communication and interpersonal skills
- The person should have at least 6 years of experience in Databricks and Data Factory.
at HighLevel Inc.
Who We Are:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have 1000+ employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website - https://www.gohighlevel.com/
YouTube Channel- https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 450K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
We operate at scale, managing over 40 billion API hits and 120 billion events monthly, with more than 500 micro-services in production. Our systems handle 200+ terabytes of application data and 6 petabytes of storage.
About the role
We are looking for a senior AI engineer for the platform team. The ideal candidate will be responsible for deriving key insights from huge sets of data, building models around those, and taking them to production.
● Implementation
- Analyze data to gain new insights, develop custom data models and algorithms to apply to distributed data sets at scale
- Build continuous integration, test-driven development, and production deployment frameworks
● Architecture- Design the architecture with the Data and DevOps engineers
● Ownership- Take ownership of the accuracy of the models and coordinate with the stakeholders to keep moving forward
● Releases- Take ownership of the releases and pre-plan according to the needs
● Quality- Maintain high standards of code quality through regular design and code reviews.
Qualifications
- Extensive hands-on experience in Python/R is a must.
- 3 years of AI engineering experience
- Proficiency in ML/DL frameworks and tools (e.g., Pandas, NumPy, scikit-learn, PyTorch, Lightning, Hugging Face)
- Strong command of the low-level operations involved in building architectures for ensemble models, NLP, and CV (e.g., XGBoost, Transformers, CNNs, diffusion models)
- Experience with end-to-end system design: data analysis, distributed training, model optimisation, and evaluation systems for large-scale training & prediction.
- Experience working with huge datasets (ideally in terabytes) would be a plus.
- Experience with frameworks like Apache Spark (using MLlib or PySpark), Dask, etc.
- Practical experience in API development (e.g., Flask, FastAPI); a minimal serving sketch follows this list.
- Experience with MLOps principles (scalable development & deployment of complex data science workflows) and associated tools, e.g., MLflow, Kubeflow, ONNX
- Bachelor's degree or equivalent experience in Engineering or a related field of study
- Strong people, communication, and problem-solving skills
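As a small, hypothetical illustration of the API-development item above, a model-serving endpoint in FastAPI might look like the sketch below. The feature schema and the scoring function are invented placeholders standing in for a real loaded model.

```python
# Minimal FastAPI model-serving sketch (hypothetical model and schema).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Features(BaseModel):
    lead_score: float
    days_since_signup: int

# Placeholder "model": in practice this would be a loaded MLflow / sklearn artifact.
def predict_conversion(features: Features) -> float:
    return min(1.0, 0.1 + 0.02 * features.lead_score)

@app.post("/predict")
def predict(features: Features):
    return {"conversion_probability": predict_conversion(features)}
```

If this were saved as app.py, it could be served locally with something like `uvicorn app:app`.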
What to Expect when you Apply
● Exploratory Call
● Technical Round I/II
● Assignment
● Cultural Fitment Round
EEO Statement:
At HighLevel, we value diversity. In fact, we understand it makes our organization stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, increasing OEE, and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned universities, and the award of a renowned AI prize (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Data Engineer from the manufacturing Industry with over two years of experience to join our team. As a data engineer, you will be responsible for designing, building, and maintaining the infrastructure required for the collection, storage, processing, and analysis of large and complex data sets. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required
- Experience in the manufacturing industry (metal industry is a plus)
- 2+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of Apache big data technologies such as Spark, Flink, and Hadoop, and of NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud infrastructure.
Greetings! Wissen Technology is hiring for the position of Data Engineer.
Please find the job description below for your reference:
JD
- Design, develop, and maintain data pipelines on AWS EMR (Elastic MapReduce) to support data processing and analytics.
- Implement data ingestion processes from various sources including APIs, databases, and flat files.
- Optimize and tune big data workflows for performance and scalability.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Manage and monitor EMR clusters, ensuring high availability and reliability.
- Develop ETL (Extract, Transform, Load) processes to cleanse, transform, and store data in data lakes and data warehouses (a minimal PySpark sketch follows this list).
- Implement data security best practices to ensure data is protected and compliant with relevant regulations.
- Create and maintain technical documentation related to data pipelines, workflows, and infrastructure.
- Troubleshoot and resolve issues related to data processing and EMR cluster performance.
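A bare-bones PySpark ETL step of the kind that might run on such an EMR cluster is sketched below. The bucket names and columns are invented for illustration only.

```python
# Minimal PySpark ETL step for an EMR cluster (illustrative bucket names and columns).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clickstream-etl").getOrCreate()

# Extract: raw JSON events landed in S3.
events = spark.read.json("s3://example-raw-bucket/clickstream/2024-01-01/")

# Transform: cleanse nulls and derive a date partition column.
cleaned = (events.dropna(subset=["user_id", "event_ts"])
                 .withColumn("event_date", F.to_date("event_ts")))

# Load: write partitioned Parquet into the data lake for downstream warehousing.
(cleaned.write.mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-lake-bucket/curated/clickstream/"))

spark.stop()
```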
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in data engineering, with a focus on big data technologies.
- Strong experience with AWS services, particularly EMR, S3, Redshift, Lambda, and Glue.
- Proficiency in programming languages such as Python, Java, or Scala.
- Experience with big data frameworks and tools such as Hadoop, Spark, Hive, and Pig.
- Solid understanding of data modeling, ETL processes, and data warehousing concepts.
- Experience with SQL and NoSQL databases.
- Familiarity with CI/CD pipelines and version control systems (e.g., Git).
- Strong problem-solving skills and the ability to work independently and collaboratively in a team environment
- Sr. Solution Architect
- Job Location – Bangalore
- Need candidates who can join in 15 days or less.
- Overall, 12-15 years of experience.
Looking for this tech stack in a Sr. Solution Architect (who also has a Delivery Manager background). Someone who has heavy business and IT stakeholder collaboration and negotiation skills, someone who can provide thought leadership, collaborate in the development of Product roadmaps, influence decisions, negotiate effectively with business and IT stakeholders, etc.
- Building data pipelines using Azure data tools and services (Azure Data Factory, Azure Databricks, Azure Function, Spark, Azure Blob/ADLS, Azure SQL, Snowflake..)
- Administration of cloud infrastructure in public clouds such as Azure
- Monitoring cloud infrastructure, applications, big data pipelines and ETL workflows
- Managing outages, customer escalations, crisis management, and other similar circumstances.
- Understanding of DevOps tools and environments like Azure DevOps, Jenkins, Git, Ansible, Terraform.
- SQL, Spark SQL, Python, PySpark
- Familiarity with agile software delivery methodologies
- Proven experience collaborating with global Product Team members, including Business Stakeholders located in NA
Responsibilities:
- Lead the design, development, and implementation of scalable data architectures leveraging Snowflake, Python, PySpark, and Databricks.
- Collaborate with business stakeholders to understand requirements and translate them into technical specifications and data models.
- Architect and optimize data pipelines for performance, reliability, and efficiency.
- Ensure data quality, integrity, and security across all data processes and systems.
- Provide technical leadership and mentorship to junior team members.
- Stay abreast of industry trends and best practices in data architecture and analytics.
- Drive innovation and continuous improvement in data management practices.
Requirements:
- Bachelor's degree in Computer Science, Information Systems, or a related field. Master's degree preferred.
- 5+ years of experience in data architecture, data engineering, or a related field.
- Strong proficiency in Snowflake, including data modeling, performance tuning, and administration.
- Expertise in Python and PySpark for data processing, manipulation, and analysis.
- Hands-on experience with Databricks for building and managing data pipelines.
- Proven leadership experience, with the ability to lead cross-functional teams and drive projects to successful completion.
- Experience in the banking or insurance domain is highly desirable.
- Excellent communication skills, with the ability to effectively collaborate with stakeholders at all levels of the organization.
- Strong problem-solving and analytical skills, with a keen attention to detail.
Benefits:
- Competitive salary and performance-based incentives.
- Comprehensive benefits package, including health insurance, retirement plans, and wellness programs.
- Flexible work arrangements, including remote options.
- Opportunities for professional development and career advancement.
- Dynamic and collaborative work environment with a focus on innovation and continuous learning.
Sr. Data Engineer (Data Warehouse-Snowflake)
Experience: 5+yrs
Location: Pune (Hybrid)
As a Senior Data engineer with Snowflake expertise you are a subject matter expert who is curious and an innovative thinker to mentor young professionals. You are a key person to convert Vision and Data Strategy for Data solutions and deliver them. With your knowledge you will help create data-driven thinking within the organization, not just within Data teams, but also in the wider stakeholder community.
Skills Preferred
- Advanced written, verbal, and analytic skills, and demonstrated ability to influence and facilitate sustained change. Ability to convey information clearly and concisely to all levels of staff and management about programs, services, best practices, strategies, and organizational mission and values.
- Proven ability to focus on priorities, strategies, and vision.
- Very Good understanding in Data Foundation initiatives, like Data Modelling, Data Quality Management, Data Governance, Data Maturity Assessments and Data Strategy in support of the key business stakeholders.
- Actively deliver the roll-out and embedding of Data Foundation initiatives in support of the key business programs advising on the technology and using leading market standard tools.
- Coordinate the change management process, incident management and problem management process.
- Ensure traceability of requirements from Data through testing and scope changes, to training and transition.
- Drive implementation efficiency and effectiveness across the pilots and future projects to minimize cost, increase speed of implementation and maximize value delivery
Knowledge Preferred
- Extensive knowledge and hands on experience with Snowflake and its different components like User/Group, Data Store/ Warehouse management, External Stage/table, working with semi structured data, Snowpipe etc.
- Implement and manage CI/CD for migrating and deploying codes to higher environments with Snowflake codes.
- Proven experience with Snowflake Access control and authentication, data security, data sharing, working with VS Code extension for snowflake, replication, and failover, optimizing SQL, analytical ability to troubleshoot and debug on development and production issues quickly is key for success in this role.
- Proven technology champion in working with relational and data warehouse databases and query authoring (SQL), as well as working familiarity with a variety of databases.
- Highly Experienced in building and optimizing complex queries. Good with manipulating, processing, and extracting value from large, disconnected datasets.
- Your experience in handling big data sets and big data technologies will be an asset.
- Proven champion with in-depth knowledge of any one of the scripting languages: Python, SQL, PySpark (a minimal Snowflake-from-Python sketch follows this list).
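A tiny, hedged example of working with Snowflake from Python is sketched below using the snowflake-connector-python package. The account identifier, credentials, warehouse, and table are all placeholders, not details from this posting.

```python
# Minimal Snowflake-from-Python sketch (placeholder account, credentials, and objects).
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.eu-central-1",   # placeholder account identifier
    user="REPORTING_USER",            # placeholder user
    password="********",              # use a secrets manager in practice
    warehouse="ANALYTICS_WH",
    database="ANALYTICS_DB",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Example query against a placeholder table.
    cur.execute("SELECT region, COUNT(*) FROM orders GROUP BY region")
    for region, order_count in cur.fetchall():
        print(region, order_count)
finally:
    conn.close()
```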
Primary responsibilities
- You will be an asset in our team bringing deep technical skills and capabilities to become a key part of projects defining the data journey in our company, keen to engage, network and innovate in collaboration with company wide teams.
- Collaborate with the data and analytics team to develop and maintain a data model and data governance infrastructure using a range of different storage technologies that enables optimal data storage and sharing using advanced methods.
- Support the development of processes and standards for data mining, data modeling and data protection.
- Design and implement continuous process improvements for automating manual processes and optimizing data delivery.
- Assess and report on the unique data needs of key stakeholders and troubleshoot any data-related technical issues through to resolution.
- Work to improve data models that support business intelligence tools, improve data accessibility and foster data-driven decision making.
- Ensure traceability of requirements from Data through testing and scope changes, to training and transition.
- Manage and lead technical design and development activities for implementation of large-scale data solutions in Snowflake to support multiple use cases (transformation, reporting and analytics, data monetization, etc.).
- Translate advanced business data, integration and analytics problems into technical approaches that yield actionable recommendations, across multiple, diverse domains; communicate results and educate others through design and build of insightful presentations.
- Exhibit strong knowledge of the Snowflake ecosystem and can clearly articulate the value proposition of cloud modernization/transformation to a wide range of stakeholders.
Relevant work experience
Bachelor's degree in a Science, Technology, Engineering, Mathematics or Computer Science discipline, or equivalent, with 7+ years of experience in enterprise-wide data warehousing, governance, policies, procedures, and implementation.
Aptitude for working with data, interpreting results, business intelligence and analytic best practices.
Business understanding
Good knowledge and understanding of Consumer and industrial products sector and IoT.
Good functional understanding of solutions supporting business processes.
Skill Must have
- Snowflake 5+ years
- Overall different Data warehousing techs 5+ years
- SQL 5+ years
- Data warehouse designing experience 3+ years
- Experience with cloud and on-prem hybrid models in data architecture
- Knowledge of Data Governance and strong understanding of data lineage and data quality
- Programming & Scripting: Python, Pyspark
- Database technologies such as Traditional RDBMS (MS SQL Server, Oracle, MySQL, PostgreSQL)
Nice to have
- Demonstrated experience in modern enterprise data integration platforms such as Informatica
- AWS cloud services: S3, Lambda, Glue, Kinesis, API Gateway, EC2, EMR, RDS, and Redshift
- Good understanding of Data Architecture approaches
- Experience in designing and building streaming data ingestion, analysis and processing pipelines using Kafka, Kafka Streams, Spark Streaming, StreamSets and similar cloud-native technologies.
- Experience with implementation of operations concerns for a data platform such as monitoring, security, and scalability
- Experience working in DevOps, Agile, Scrum, Continuous Delivery and/or Rapid Application Development environments
- Building mock and proof-of-concepts across different capabilities/tool sets exposure
- Experience working with structured, semi-structured, and unstructured data, extracting information, and identifying linkages across disparate data sets
We are actively seeking a self-motivated Data Engineer with expertise in Azure cloud and Databricks, with a thorough understanding of Delta Lake and Lake-house Architecture. The ideal candidate should excel in developing scalable data solutions, crafting platform tools, and integrating systems, while demonstrating proficiency in cloud-native database solutions and distributed data processing.
Key Responsibilities:
- Contribute to the development and upkeep of a scalable data platform, incorporating tools and frameworks that leverage Azure and Databricks capabilities.
- Exhibit proficiency in various RDBMS databases such as MySQL and SQL Server, emphasizing their integration in applications and pipeline development (see the sketch after this list).
- Design and maintain high-caliber code, including data pipelines and applications, utilizing Python, Scala, and PHP.
- Implement effective data processing solutions via Apache Spark, optimizing Spark applications for large-scale data handling.
- Optimize data storage using formats like Parquet and Delta Lake to ensure efficient data accessibility and reliable performance.
- Demonstrate understanding of Hive Metastore, Unity Catalog Metastore, and the operational dynamics of external tables.
- Collaborate with diverse teams to convert business requirements into precise technical specifications.
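As a rough sketch of the RDBMS-to-lakehouse integration referenced above: the connection details, table names, and the availability of a MySQL JDBC driver and Delta Lake on the cluster are all assumptions for illustration.

```python
# Sketch: ingest a MySQL table via JDBC and persist it as a Delta table.
# Assumes a Databricks-style cluster with the MySQL JDBC driver and Delta available.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mysql-to-delta").getOrCreate()

customers = (spark.read.format("jdbc")
             .option("url", "jdbc:mysql://example-host:3306/shop")  # placeholder host/db
             .option("dbtable", "customers")
             .option("user", "etl_user")
             .option("password", "********")                        # use a secret scope
             .load())

# Persist to the lakehouse as Delta for downstream pipelines.
(customers.write.format("delta")
          .mode("overwrite")
          .save("/mnt/datalake/bronze/customers"))
```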
Requirements:
- Bachelor’s degree in Computer Science, Engineering, or a related discipline.
- Demonstrated hands-on experience with Azure cloud services and Databricks.
- Proficient programming skills in Python, Scala, and PHP.
- In-depth knowledge of SQL, NoSQL databases, and data warehousing principles.
- Familiarity with distributed data processing and external table management.
- Insight into enterprise data solutions for PIM, CDP, MDM, and ERP applications.
- Exceptional problem-solving acumen and meticulous attention to detail.
Additional Qualifications :
- Acquaintance with data security and privacy standards.
- Experience in CI/CD pipelines and version control systems, notably Git.
- Familiarity with Agile methodologies and DevOps practices.
- Competence in technical writing for comprehensive documentation.
Technical Skills:
- Ability to understand and translate business requirements into design.
- Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
- Experience in creating ETL jobs using Python/PySpark.
- Proficiency in creating AWS Lambda functions for event-based jobs (a minimal handler sketch follows this list).
- Knowledge of automating ETL processes using AWS Step Functions.
- Competence in building data warehouses and loading data into them.
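For illustration, an event-based AWS Lambda function that kicks off a Glue ETL job when a file lands in S3 could look roughly like the sketch below. The Glue job name and argument key are placeholders.

```python
# Hypothetical Lambda handler: start a Glue ETL job when an object lands in S3.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    # S3 put-event records carry the bucket and key of the new object.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Pass the new object location to a (placeholder) Glue job as arguments.
    response = glue.start_job_run(
        JobName="curate-orders-job",
        Arguments={"--source_path": f"s3://{bucket}/{key}"},
    )
    return {"job_run_id": response["JobRunId"]}
```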
Responsibilities:
- Understand business requirements and translate them into design.
- Assess AWS infrastructure needs for development work.
- Develop ETL jobs using Python/PySpark to meet requirements.
- Implement AWS Lambda for event-based tasks.
- Automate ETL processes using AWS Step Functions.
- Build data warehouses and manage data loading.
- Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
Publicis Sapient Overview:
As Senior Associate L1 in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and independently drive design discussions to ensure the necessary health of the overall solution.
Job Summary:
As Senior Associate L2 in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and independently drive design discussions to ensure the necessary health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java / Scala / Python, experience in Data Ingestion, Integration and Data Wrangling, Computation and Analytics pipelines, and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms.
Role & Responsibilities:
Your role is focused on Design, Development and delivery of solutions involving:
• Data Integration, Processing & Governance
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Implement scalable architectural models for data processing and storage
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode (a minimal streaming-ingestion sketch follows this list)
• Build functionality for data analytics, search and aggregation
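A minimal real-time ingestion sketch with Spark Structured Streaming and Kafka is shown below. The broker addresses, topic, and output paths are placeholders, and the spark-sql-kafka connector package is assumed to be available on the cluster.

```python
# Minimal Spark Structured Streaming ingestion from Kafka (placeholder topic/paths).
# Assumes the spark-sql-kafka connector package is available on the cluster.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingestion").getOrCreate()

stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")   # placeholder brokers
          .option("subscribe", "events")                       # placeholder topic
          .load())

# Kafka values arrive as bytes; cast to string before parsing downstream.
decoded = stream.select(F.col("value").cast("string").alias("json_payload"))

query = (decoded.writeStream.format("parquet")
         .option("path", "s3a://example-lake/raw/events/")
         .option("checkpointLocation", "s3a://example-lake/checkpoints/events/")
         .outputMode("append")
         .start())

query.awaitTermination()
```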
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1. Overall 5+ years of IT experience with 3+ years in Data related technologies
2. Minimum 2.5 years of experience in Big Data technologies and working exposure to at least one cloud platform on related data services (AWS / Azure / GCP)
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines.
4. Strong experience in at least one of the programming languages Java, Scala, Python. Java preferable
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
6. Well-versed and working knowledge of data platform related services on at least one cloud platform, IAM and data security
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – Infra provisioning on cloud, auto build & deployment pipelines, code quality
6. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Job Description
Responsibilities:
- Collaborate with stakeholders to understand business objectives and requirements for AI/ML projects.
- Conduct research and stay up-to-date with the latest AI/ML algorithms, techniques, and frameworks.
- Design and develop machine learning models, algorithms, and data pipelines.
- Collect, preprocess, and clean large datasets to ensure data quality and reliability.
- Train, evaluate, and optimize machine learning models using appropriate evaluation metrics.
- Implement and deploy AI/ML models into production environments.
- Monitor model performance and propose enhancements or updates as needed.
- Collaborate with software engineers to integrate AI/ML capabilities into existing software systems.
- Perform data analysis and visualization to derive actionable insights.
- Stay informed about emerging trends and advancements in the field of AI/ML and apply them to improve existing solutions.
Strong experience with Apache Spark (PySpark) is a must.
Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience of 3-5 years as an AI/ML Engineer or a similar role.
- Strong knowledge of machine learning algorithms, deep learning frameworks, and data science concepts.
- Proficiency in programming languages such as Python, Java, or C++.
- Experience with popular AI/ML libraries and frameworks, such as TensorFlow, Keras, PyTorch, or scikit-learn.
- Familiarity with cloud platforms, such as AWS, Azure, or GCP, and their AI/ML services.
- Solid understanding of data preprocessing, feature engineering, and model evaluation techniques.
- Experience in deploying and scaling machine learning models in production environments.
- Strong problem-solving skills and ability to work on multiple projects simultaneously.
- Excellent communication and teamwork skills.
Preferred Skills:
- Experience with natural language processing (NLP) techniques and tools.
- Familiarity with big data technologies, such as Hadoop, Spark, or Hive.
- Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes.
- Understanding of DevOps practices for AI/ML model deployment
- Apache Spark, PySpark
Data Scientist – Program Embedded
Job Description:
We are seeking a highly skilled and motivated senior data scientist to support a big data program. The successful candidate will play a pivotal role in supporting multiple projects in this program, covering traditional tasks from revenue management, demand forecasting, and improving customer experience to testing/using new tools/platforms such as Copilot and Fabric for different purposes. The expected candidate will have deep expertise in machine learning methodology and applications and should have completed multiple large-scale data science projects (full cycle from ideation to BAU). Beyond technical expertise, problem solving in complex set-ups will be key to success in this role. This is a data science role directly embedded into the program/projects; stakeholder management and collaboration with partners are crucial to success in this role (on top of the deep expertise).
What we are looking for:
- Highly efficient in Python/PySpark/R.
- Understands MLOps concepts, with working experience in product industrialization (from a data science point of view). Experience in building products for live deployment, and in continuous development and continuous integration.
- Familiar with cloud platforms such as Azure, GCP, and the data management systems on such platform. Familiar with Databricks and product deployment on Databricks.
- Experience in ML projects involving techniques: Regression, Time Series, Clustering, Classification, Dimension Reduction, Anomaly detection with traditional ML approaches and DL approaches.
- Solid background in statistics, probability distributions, A/B testing validation, univariate/multivariate analysis, hypothesis testing for different purposes, data augmentation, etc.
- Familiar with designing testing framework for different modelling practice/projects based on business needs.
- Exposure to Gen AI tools and enthusiastic about experimenting and have new ideas on what can be done.
- If they have improved an internal company process using an AI tool, that would be great (e.g. process simplification, manual task automation, auto emails)
- Ideally, 10+ years of experience, having held independent, business-facing roles.
- CPG or retail experience as a data scientist would be nice, but is not the number-one priority, especially for those who have navigated multiple industries.
- Being proactive and collaborative would be essential.
Some projects examples within the program:
- Test new tools/platforms such as Copilot and Fabric for commercial reporting: testing, validation, and building trust.
- Building algorithms for predicting trends in categories and consumption to support dashboards.
- Revenue Growth Management, create/understand the algorithms behind the tools (can be built by 3rd parties) we need to maintain or choose to improve. Able to prioritize and build product roadmap. Able to design new solutions and articulate/quantify the limitation of the solutions.
- Demand forecasting, create localized forecasts to improve in store availability. Proper model monitoring for early detection of potential issues in the forecast focusing particularly on improving the end user experience.
5-7 years of experience in Data Engineering with solid experience in design, development and implementation of end-to-end data ingestion and data processing system in AWS platform.
2-3 years of experience in AWS Glue, Lambda, Appflow, EventBridge, Python, PySpark, Lake House, S3, Redshift, Postgres, API Gateway, CloudFormation, Kinesis, Athena, KMS, IAM.
Experience in modern data architecture, Lake House, Enterprise Data Lake, Data Warehouse, API interfaces, solution patterns, standards and optimizing data ingestion.
Experience in building data pipelines from source systems like SAP Concur, Veeva Vault, Azure Cost, various social media platforms, or similar source systems.
Expertise in analyzing source data and designing a robust and scalable data ingestion framework and pipelines adhering to client Enterprise Data Architecture guidelines.
Proficient in design and development of solutions for real-time (or near real time) stream data processing as well as batch processing on the AWS platform.
Work closely with business analysts, data architects, data engineers, and data analysts to ensure that the data ingestion solutions meet the needs of the business.
Troubleshoot and provide support for issues related to data quality and data ingestion solutions. This may involve debugging data pipeline processes, optimizing queries, or troubleshooting application performance issues.
Experience in working in Agile/Scrum methodologies, CI/CD tools and practices, coding standards, code reviews, source management (GITHUB), JIRA, JIRA Xray and Confluence.
Experience or exposure to design and development using Full Stack tools.
Strong analytical and problem-solving skills, excellent communication (written and oral), and interpersonal skills.
Bachelor's or master's degree in computer science or related field.
Publicis Sapient Overview:
As Senior Associate L1 in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and independently drive design discussions to ensure the necessary health of the overall solution.
Job Summary:
As Senior Associate L1 in Data Engineering, you will do technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and independently drive design discussions to ensure the necessary health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java / Scala / Python, experience in Data Ingestion, Integration and Data Wrangling, Computation and Analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferable.
Role & Responsibilities:
Job Title: Senior Associate L1 – Data Engineering
Your role is focused on Design, Development and delivery of solutions involving:
• Data Ingestion, Integration and Transformation
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1. Overall 3.5+ years of IT experience with 1.5+ years in Data related technologies
2. Minimum 1.5 years of experience in Big Data technologies
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.
4. Strong experience in at least one of the programming languages Java, Scala, Python. Java preferable
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – Infra provisioning on cloud, auto build & deployment pipelines, code quality
6. Working knowledge of data platform related services on at least one cloud platform, IAM and data security
7. Cloud data specialty and other related Big Data technology certifications
Job Title: Senior Associate L1 – Data Engineering
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Title: Lead Data Engineer
Experience: 10+ years
Budget: 32-36 LPA
Location: Bangalore
Mode of Work: Work from office
Primary Skills: Databricks, Spark, PySpark, SQL, Python, AWS
Qualification: Any Engineering degree
Roles and Responsibilities:
• 8 - 10+ years’ experience in developing scalable Big Data applications or solutions on distributed platforms.
• Able to partner with others in solving complex problems by taking a broad perspective to identify innovative solutions.
• Strong skills building positive relationships across Product and Engineering.
• Able to influence and communicate effectively, both verbally and written, with team members and business stakeholders.
• Able to quickly pick up new programming languages, technologies, and frameworks.
• Experience working in Agile and Scrum development processes.
• Experience working in a fast-paced, results-oriented environment.
• Experience in Amazon Web Services (AWS), mainly S3, Managed Airflow, EMR/EC2, IAM, etc.
• Experience working with Data Warehousing tools, including SQL database, Presto, and Snowflake.
• Experience architecting data products in Streaming, Serverless and Microservices architectures and platforms.
• Experience working with Data platforms, including EMR, Airflow, Databricks (Data Engineering & Delta Lake components, and Lakehouse Medallion architecture), etc. (see the Delta Lake sketch after this list).
• Experience with creating/configuring Jenkins pipelines for a smooth CI/CD process for Managed Spark jobs, building Docker images, etc.
• Experience working with distributed technology tools, including Spark, Python, Scala.
• Working knowledge of Data Warehousing, Data Modelling, Governance and Data Architecture.
• Working knowledge of Reporting & Analytical tools such as Tableau, QuickSight, etc.
• Demonstrated experience in learning new technologies and skills.
• Bachelor’s degree in Computer Science, Information Systems, Business, or other relevant subject area.
A LEADING US BASED MNC
Data Engineering : Senior Engineer / Manager
As Senior Engineer/Manager in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions. You will independently drive design discussions to ensure the necessary health of the overall solution.
Must Have skills :
1. GCP
2. Spark Streaming: Live data streaming experience is desired.
3. Any one coding language: Java/Python/Scala
Skills & Experience :
- Overall experience of MINIMUM 5+ years, with a minimum of 4 years of relevant experience in Big Data technologies
- Hands-on experience with the Hadoop stack - HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.
- Strong experience in at least one of the programming languages Java, Scala, Python; Java preferred
- Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
- Well-versed and working knowledge with data platform related services on GCP
- Bachelor's degree and 6 to 12 years of work experience, or any combination of education, training and/or experience that demonstrates the ability to perform the duties of the position
Your Impact :
- Data Ingestion, Integration and Transformation
- Data Storage and Computation Frameworks, Performance Optimizations
- Analytics & Visualizations
- Infrastructure & Cloud Computing
- Data Management Platforms
- Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
- Build functionality for data analytics, search and aggregation (see the batch sketch after this list)
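As a rough illustration of the batch side of the impact areas above, the sketch below ingests data from two heterogeneous sources (a JDBC database and CSV files), joins them, and produces an aggregate. The connection details, table, file path, and column names are hypothetical, and the JDBC read assumes the relevant driver is on the classpath.

```python
# Minimal sketch: batch ingestion from two heterogeneous sources and an aggregation.
# JDBC URL, credentials, table, file path, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, sum as sum_

spark = SparkSession.builder.appName("batch-ingest-aggregate").getOrCreate()

# Source 1: relational database over JDBC.
customers = (spark.read.format("jdbc")
             .option("url", "jdbc:postgresql://db-host:5432/sales")
             .option("dbtable", "public.customers")
             .option("user", "reader").option("password", "secret")
             .load())

# Source 2: flat files landed in cloud storage.
orders = (spark.read
          .option("header", "true").option("inferSchema", "true")
          .csv("gs://landing/orders/*.csv"))

# Join and aggregate: revenue per customer segment.
revenue_by_segment = (orders.join(customers, "customer_id")
                      .groupBy("segment")
                      .agg(sum_(col("amount")).alias("total_revenue")))

revenue_by_segment.write.mode("overwrite").parquet("gs://curated/revenue_by_segment")
```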
🚀 Exciting Opportunity: Data Engineer Position in Gurugram 🌐
Hello
We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!
Position: Data Engineer
Location: Gurugram (Gurgaon)
Experience: 5+ years
Key Skills:
- Python
- Spark, Pyspark
- Data Governance
- Cloud (AWS/Azure/GCP)
Main Responsibilities:
- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.
- Implement ETL processes for telemetry-based and stationary test data.
- Support in defining data governance, including data lifecycle management.
- Develop large-scale data processing engines and real-time search and analytics based on time series data.
- Ensure technical, methodological, and quality aspects.
- Support CI/CD processes.
- Foster know-how development and transfer, continuous improvement of leading technologies within Data Engineering.
- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.
Qualification Requirements:
- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.
- Proficiency in Python and the PyData stack (Pandas/Numpy).
- Experience in high-level programming languages (C#/C++/Java).
- Familiarity with scalable processing environments like Dask (or Spark).
- Proficient in Linux and scripting languages (Bash Scripts).
- Experience in containerization and orchestration of containerized services (Kubernetes).
- Education in database technologies (SQL/OLAP and NoSQL).
- Interest in Big Data storage technologies (Elastic, ClickHouse).
- Familiarity with Cloud technologies (Azure, AWS, GCP).
- Fluent English communication skills (speaking and writing).
- Ability to work constructively with a global team.
- Willingness to travel for business trips during development projects.
Preferable:
- Working knowledge of vehicle architectures, communication, and components.
- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).
- Experience in time-series processing (see the sketch after this list).
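Purely to illustrate the time-series processing mentioned above, here is a small pandas sketch that resamples raw telemetry readings to one-minute averages. The file path and column names are hypothetical.

```python
# Minimal sketch: resampling telemetry time-series data with pandas.
# File path and column names are hypothetical.
import pandas as pd

# Raw telemetry with a timestamp column and a signal value per row.
telemetry = pd.read_csv("telemetry.csv", parse_dates=["timestamp"])

# Index by time and compute one-minute mean values per signal.
per_minute = (telemetry
              .set_index("timestamp")
              .groupby("signal_name")["value"]
              .resample("1min")
              .mean()
              .reset_index())

print(per_minute.head())
```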
How to Apply:
Interested candidates, please share your updated CV/resume with me.
Thank you for considering this exciting opportunity.
AWS Glue Developer
Work Experience: 6 to 8 Years
Work Location: Noida, Bangalore, Chennai & Hyderabad
Must Have Skills: AWS Glue, DMS, SQL, Python, PySpark, Data integrations and Data Ops,
Job Reference ID:BT/F21/IND
Job Description:
Design, build and configure applications to meet business process and application requirements.
Responsibilities:
7 years of work experience with ETL, Data Modelling, and Data Architecture. Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark. Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions. Orchestrate using Airflow.
Technical Experience:
Hands-on experience in developing a data platform and its components: data lake, cloud data warehouse, APIs, batch and streaming data pipelines. Experience with building data pipelines and applications to stream and process large datasets at low latencies.
➢ Enhancements, new development, defect resolution and production support of Big data ETL development using AWS native services.
➢ Create data pipeline architecture by designing and implementing data ingestion solutions.
➢ Integrate data sets using AWS services such as Glue, Lambda functions/ Airflow.
➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.
➢ Author ETL processes using Python and PySpark (a minimal Glue job sketch follows this list).
➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.
➢ ETL process monitoring using CloudWatch events.
➢ You will be working in collaboration with other teams. Good communication is a must.
➢ Must have experience in using AWS service APIs, the AWS CLI and SDKs.
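For orientation only, a minimal AWS Glue (PySpark) job skeleton along the lines described above might look like the following; the catalog database, table, and S3 path are hypothetical placeholders.

```python
# Minimal AWS Glue PySpark job sketch: read from the Glue Data Catalog,
# apply a simple transformation, and write Parquet to S3.
# Database, table, and S3 paths are hypothetical.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalog table as a DynamicFrame.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders")

# Simple transformation via the Spark DataFrame API.
df = source.toDF().filter("order_status = 'COMPLETED'")

# Write the result back to S3 as Parquet.
df.write.mode("overwrite").parquet("s3://curated-bucket/orders_completed/")

job.commit()
```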
Professional Attributes:
➢ Experience operating very large data warehouses or data lakes. Expert-level skills in writing and optimizing SQL. Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.
➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, Dynamo DB, Athena, Glue in AWS environment.
➢ Expertise in S3, RDS, Redshift, Kinesis, EC2 clusters highly desired.
Qualification:
➢ Degree in Computer Science, Computer Engineering or equivalent.
Salary: Commensurate with experience and demonstrated competence
POST - SENIOR DATA ENGINEER WITH AWS
Experience : 5 years
Must-have:
• Highly skilled in Python and PySpark
• Expertise in writing AWS Glue ETL job scripts
• Experience in working with Kafka
• Extensive SQL DB experience – Postgres
Good-to-have:
• Experience in working with data analytics and modelling
• Hands on Experience of PowerBI visualization tool
• Knowledge of and hands-on experience with a version control system - Git
Common:
• Excellent communication and presentation skills (written and verbal) to all levels of an organization
• Should be results oriented with the ability to prioritize and drive multiple initiatives to complete your work on time
• Proven ability to influence a diverse, geographically dispersed group of individuals to facilitate, moderate, and influence productive design and implementation discussions driving towards results
Shifts - Flexible (might have to work as per US shift timings for meetings).
Employment Type - Any
at hopscotch
About the role:
Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science and individual product teams to specify, validate, prototype, scale, and deploy data pipelines features and data architecture.
Here’s what will be expected out of you:
➢ Ability to work with a fast-paced startup mindset. Should be able to manage all aspects of data extraction, transfer and load activities.
➢ Develop data pipelines that make data available across platforms.
➢ Should be comfortable in executing ETL (Extract, Transform and Load) processes which include data ingestion, data cleaning and curation into a data warehouse, database, or data platform.
➢ Work on various aspects of the AI/ML ecosystem – data modeling, data and ML pipelines.
➢ Work closely with Devops and senior Architect to come up with scalable system and model architectures for enabling real-time and batch services.
What we want:
➢ 5+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs.
➢ Well versed with the concept of Data warehousing, Data Modelling and/or Data Analysis.
➢ Experience using & building pipelines and performing ETL with industry-standard best practices on Redshift (2+ years).
➢ Ability to troubleshoot and solve performance issues with data ingestion, data processing & query execution on Redshift.
➢ Good understanding of orchestration tools like Airflow (see the DAG sketch after this list).
➢ Strong Python and SQL coding skills.
➢ Strong Experience in distributed systems like spark.
➢ Experience with AWS Data and ML Technologies (AWS Glue,MWAA, Data Pipeline,EMR,Athena, Redshift,Lambda etc).
➢ Solid hands-on experience with various data extraction techniques (CDC or time/batch based) and the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near-real-time and batch data extraction.
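To illustrate the Airflow orchestration and Redshift loading mentioned above, here is a minimal DAG sketch that copies a day's extract from S3 into Redshift via a plain SQL COPY issued from a Python task. The DAG id, cluster endpoint, credentials, IAM role, table, and bucket are all hypothetical, and the snippet assumes Airflow 2.x and psycopg2 are installed.

```python
# Minimal Airflow DAG sketch: load a daily S3 extract into Redshift with COPY.
# DAG id, cluster endpoint, credentials, IAM role, table, and bucket are hypothetical.
from datetime import datetime

import psycopg2
from airflow import DAG
from airflow.operators.python import PythonOperator


def load_to_redshift(ds, **_):
    """Issue a COPY for the partition matching the execution date (ds)."""
    conn = psycopg2.connect(
        host="redshift-cluster.example.com", port=5439,
        dbname="analytics", user="loader", password="secret")
    with conn, conn.cursor() as cur:
        cur.execute(f"""
            COPY analytics.orders
            FROM 's3://extracts-bucket/orders/{ds}/'
            IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
            FORMAT AS PARQUET;
        """)
    conn.close()


with DAG(
    dag_id="daily_orders_to_redshift",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="copy_orders", python_callable=load_to_redshift)
```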
Note:
Experience at product-based or e-commerce companies is an added advantage
Looking for freelance?
We are seeking a freelance Data Engineer with 7+ years of experience
Skills Required: Deep knowledge of any cloud (AWS, Azure, Google Cloud), Databricks, data lakes, data warehousing, Python/Scala, SQL, BI, and other analytics systems
What we are looking for
We are seeking an experienced Senior Data Engineer with experience in architecture, design, and development of highly scalable data integration and data engineering processes
- The Senior Consultant must have a strong understanding and experience with data & analytics solution architecture, including data warehousing, data lakes, ETL/ELT workload patterns, and related BI & analytics systems
- Strong in scripting languages like Python, Scala
- 5+ years of hands-on experience with one or more of these data integration/ETL tools.
- Experience building on-prem data warehousing solutions.
- Experience with designing and developing ETLs, Data Marts, Star Schema
- Designing a data warehouse solution using Synapse or Azure SQL DB
- Experience building pipelines using Synapse or Azure Data Factory to ingest data from various sources
- Understanding of integration run times available in Azure.
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases
· The Objective:
You will play a crucial role in designing, implementing, and maintaining our data infrastructure, running tests, and updating the systems
· Job function and requirements
o Expert in Python, Pandas and NumPy, with knowledge of Python web frameworks such as Django and Flask.
o Able to integrate multiple data sources and databases into one system.
o Basic understanding of frontend technologies like HTML, CSS, JavaScript.
o Able to build data pipelines.
o Strong unit test and debugging skills.
o Understanding of fundamental design principles behind a scalable application
o Good understanding of RDBMS databases such as MySQL or PostgreSQL.
o Able to analyze and transform raw data (see the sketch after this list).
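As a small, purely illustrative example of the "analyze and transform raw data" point above, the sketch below cleans a raw CSV with pandas and loads it into a PostgreSQL table via SQLAlchemy. The file name, connection string, and column names are hypothetical.

```python
# Minimal sketch: clean raw CSV data with pandas and load it into an RDBMS.
# File name, connection string, and column names are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

raw = pd.read_csv("raw_leads.csv")

cleaned = (raw
           .dropna(subset=["email"])                        # drop rows missing a key field
           .assign(email=lambda d: d["email"].str.lower(),  # normalise text columns
                   signup_date=lambda d: pd.to_datetime(d["signup_date"])))

engine = create_engine("postgresql+psycopg2://user:secret@localhost:5432/appdb")
cleaned.to_sql("leads", engine, if_exists="replace", index=False)
```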
· About us
Mitibase helps companies find warm prospects every month that are most relevant, and then helps their team act on those with automation. We do so by automatically tracking key accounts and contacts for job changes and relationship triggers, and surfacing them as warm leads in your sales pipeline.
Analytics Job Description
We are hiring an Analytics Engineer to help drive our Business Intelligence efforts. You will
partner closely with leaders across the organization, working together to understand the how
and why of people, team and company challenges, workflows and culture. The team is
responsible for delivering data and insights that drive decision-making, execution, and
investments for our product initiatives.
You will work cross-functionally with product, marketing, sales, engineering, finance, and our
customer-facing teams enabling them with data and narratives about the customer journey.
You’ll also work closely with other data teams, such as data engineering and product analytics,
to ensure we are creating a strong data culture at Blend that enables our cross-functional partners
to be more data-informed.
Role: Data Engineer
Please find below the JD for the Data Engineer role.
Location: Guindy, Chennai
How you’ll contribute:
• Develop objectives and metrics, ensure priorities are data-driven, and balance short-term and long-term goals
• Develop deep analytical insights to inform and influence product roadmaps and
business decisions and help improve the consumer experience
• Work closely with GTM and supporting operations teams to author and develop core
data sets that empower analyses
• Deeply understand the business and proactively spot risks and opportunities
• Develop dashboards and define metrics that drive key business decisions
• Build and maintain scalable ETL pipelines via solutions such as Fivetran, Hightouch,
and Workato
• Design our Analytics and Business Intelligence architecture, assessing and implementing new technologies that fit
• Work with our engineering teams to continually make our data pipelines and tooling
more resilient
Who you are:
• Bachelor’s degree or equivalent required from an accredited institution with a
quantitative focus such as Economics, Operations Research, Statistics, Computer Science OR 1-3 Years of Experience as a Data Analyst, Data Engineer, Data Scientist
• Must have strong SQL and data modeling skills, with experience applying skills to
thoughtfully create data models in a warehouse environment.
• A proven track record of using analysis to drive key decisions and influence change
• Strong storyteller and ability to communicate effectively with managers and
executives
• Demonstrated ability to define metrics for product areas, understand the right
questions to ask and push back on stakeholders in the face of ambiguous, complex
problems, and work with diverse teams with different goals
• A passion for documentation.
• A solution-oriented growth mindset. You’ll need to be a self-starter and thrive in a
dynamic environment.
• A bias towards communication and collaboration with business and technical
stakeholders.
• Quantitative rigor and systems thinking.
• Prior startup experience is preferred, but not required.
• Interest or experience in machine learning techniques (such as clustering, decision
tree, and segmentation)
• Familiarity with a scientific computing language, such as Python, for data wrangling
and statistical analysis
• Experience with a SQL focused data transformation framework such as dbt
• Experience with a Business Intelligence Tool such as Mode/Tableau
Mandatory Skillset:
- Very strong in SQL
- Spark OR PySpark OR Python
- Shell scripting
Job Description:
As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:
Designing and developing data pipelines: You will design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala. This includes data ingestion, data transformation, and data loading processes.
Data modeling and database design: You will design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.
Data integration and orchestration: You will leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various data sources and targets. This includes scheduling and monitoring data pipelines.
Data quality and governance: You will implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.
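As a purely illustrative sketch of such data quality checks (paths, column names, and rules are hypothetical), a simple rule-based validation in PySpark might look like this:

```python
# Minimal sketch: rule-based data quality checks on a PySpark DataFrame.
# Path, column names, rules, and thresholds are hypothetical.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
orders = spark.read.parquet("/mnt/silver/orders")


def run_checks(df: DataFrame) -> dict:
    """Return a dict of rule name -> number of violating rows."""
    total = df.count()
    return {
        "null_order_id": df.filter(col("order_id").isNull()).count(),
        "negative_amount": df.filter(col("amount") < 0).count(),
        "duplicate_order_id": total - df.dropDuplicates(["order_id"]).count(),
    }


violations = run_checks(orders)
failed = {rule: n for rule, n in violations.items() if n > 0}
if failed:
    # In a real pipeline this might fail the ADF/Databricks activity or raise an alert.
    raise ValueError(f"Data quality checks failed: {failed}")
```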
Performance optimization: You will optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.
Monitoring and troubleshooting: You will monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation. You will work closely with cross-functional teams to resolve data-related problems.
Documentation and collaboration: You will document data pipelines, data flows, and data transformation processes. You will collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide data engineering support.
Skills and Qualifications:
Strong experience with Azure Databricks, Python, SQL, ADF, PySpark, and Scala.
Proficiency in designing and developing data pipelines and ETL processes.
Solid understanding of data modeling concepts and database design principles.
Familiarity with data integration and orchestration using Azure Data Factory.
Knowledge of data quality management and data governance practices.
Experience with performance tuning and optimization of data pipelines.
Strong problem-solving and troubleshooting skills related to data engineering.
Excellent collaboration and communication skills to work effectively in cross-functional teams.
Understanding of cloud computing principles and experience with Azure services.
We are #hiring an AWS Data Engineer expert to join our team
Job Title: AWS Data Engineer
Experience: 5 to 10 years
Location: Remote
Notice: Immediate or max 20 days
Role: Permanent Role
Skillset: AWS, ETL, SQL, Python, PySpark, Postgres DB, Dremio.
Job Description:
Able to develop ETL jobs.
Able to help with data curation/cleanup, data transformation, and building ETL pipelines.
Strong Postgres DB exp and knowledge of Dremio data visualization/semantic layer between DB and the application is a plus.
SQL, Python, and PySpark are a must.
Communication skills should be good.
About Kloud9:
Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.
Kloud9 was founded with the vision of bridging the gap between E-commerce and cloud. The E-commerce of any industry is limiting and poses a huge challenge in terms of the finances spent on physical data structures.
At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.
Our sole focus is to provide cloud expertise to retail industry giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers and developers have been designing, building and implementing solutions for retailers for an average of more than 20 years.
We are a cloud vendor that is both platform and technology independent. Our vendor independence not just provides us with a unique perspective into the cloud market but also ensures that we deliver the cloud solutions available that best meet our clients' requirements.
What we are looking for:
● 3+ years’ experience developing Data & Analytic solutions
● Experience building data lake solutions leveraging one or more of the following: AWS EMR, S3, Hive & Spark
● Experience with relational SQL
● Experience with scripting languages such as Shell, Python
● Experience with source control tools such as GitHub and related dev process
● Experience with workflow scheduling tools such as Airflow
● In-depth knowledge of scalable cloud
● Has a passion for data solutions
● Strong understanding of data structures and algorithms
● Strong understanding of solution and technical design
● Has a strong problem-solving and analytical mindset
● Experience working with Agile Teams.
● Able to influence and communicate effectively, both verbally and written, with team members and business stakeholders
● Able to quickly pick up new programming languages, technologies, and frameworks
● Bachelor’s Degree in computer science
Why Explore a Career at Kloud9:
With job opportunities in prime locations of US, London, Poland and Bengaluru, we help build your career paths in cutting edge technologies of AI, Machine Learning and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with their creativity and innovative solutions. Our vested interest in our employees translates to deliver the best products and solutions to our customers.
About Kloud9:
Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.
Kloud9 was founded with the vision of bridging the gap between E-commerce and cloud. The E-commerce of any industry is limiting and poses a huge challenge in terms of the finances spent on physical data structures.
At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.
Our sole focus is to provide cloud expertise to retail industry giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers and developers have been designing, building and implementing solutions for retailers for an average of more than 20 years.
We are a cloud vendor that is both platform and technology independent. Our vendor independence not just provides us with a unique perspective into the cloud market but also ensures that we deliver the cloud solutions available that best meet our clients' requirements.
● Overall 8+ Years of Experience in Web Application development.
● 5+ years of development experience with Java 8, Spring Boot, microservices and middleware
● 3+ Years of Designing Middleware using Node JS platform.
● good to have 2+ Years of Experience in using NodeJS along with AWS Serverless platform.
● Good Experience with Javascript / TypeScript, Event Loops, ExpressJS, GraphQL, SQL DB (MySQLDB), NoSQL DB(MongoDB) and YAML templates.
● Good experience with Test-Driven Development (TDD) and automated unit testing.
● Good experience with exposing and consuming REST APIs in Java 8, the Spring Boot platform and Swagger API contracts.
● Good experience in building NodeJS middleware performing transformations, routing, aggregation, orchestration and authentication (JWT/OAuth).
● Experience supporting and working with cross-functional teams in a dynamic environment.
● Experience working in Agile Scrum Methodology.
● Very good problem-solving skills.
● A quick learner with a passion for technology.
● Excellent verbal and written communication skills in English
● Ability to communicate effectively with team members and business stakeholders
Secondary Skill Requirements:
● Experience working with any of Loopback, NestJS, Hapi.JS, Sails.JS, Passport.JS
Why Explore a Career at Kloud9:
With job opportunities in prime locations of US, London, Poland and Bengaluru, we help build your career paths in cutting edge technologies of AI, Machine Learning and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with their creativity and innovative solutions. Our vested interest in our employees translates to deliver the best products and solutions to our customers.
Requirements:
● Understanding our data sets and how to bring them together.
● Working with our engineering team to support custom solutions offered to product development.
● Filling the gap between development, engineering and data ops.
● Creating, maintaining and documenting scripts to support ongoing custom solutions.
● Excellent organizational skills, including attention to precise details
● Strong multitasking skills and ability to work in a fast-paced environment
● 5+ years of experience with Python to develop scripts.
● Know your way around RESTful APIs (able to integrate; not necessarily publish).
● You are familiar with pulling and pushing files from SFTP and AWS S3.
● Experience with any cloud solutions including GCP / AWS / OCI / Azure.
● Familiarity with SQL programming to query and transform data from relational databases.
● Familiarity with Linux (and the Linux work environment).
● Excellent written and verbal communication skills
● Extracting, transforming, and loading data into internal databases and Hadoop
● Optimizing our new and existing data pipelines for speed and reliability
● Deploying product build and product improvements
● Documenting and managing multiple repositories of code
● Experience with SQL and NoSQL databases (Cassandra, MySQL)
● Hands-on experience in data pipelining and ETL (any of these frameworks/tools: Hadoop, BigQuery, Redshift, Athena)
● Hands-on experience in Airflow
● Understanding of best practices and common coding patterns around storing, partitioning, warehousing and indexing of data
● Experience in reading data from Kafka topics (both live stream and offline; see the batch-read sketch after this list)
● Experience in PySpark and DataFrames
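To illustrate the "offline" Kafka read mentioned in the list above, here is a minimal batch read of a topic into a PySpark DataFrame; the broker address, topic name, and offset range are hypothetical, and the Spark–Kafka connector is assumed to be available.

```python
# Minimal sketch: batch ("offline") read of a Kafka topic into a PySpark DataFrame.
# Broker address, topic name, and offset range are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-batch-read").getOrCreate()

df = (spark.read
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "clickstream")
      .option("startingOffsets", "earliest")
      .option("endingOffsets", "latest")
      .load())

# Kafka records arrive as binary key/value; cast to strings for inspection.
events = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")
events.show(10, truncate=False)
```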
Responsibilities:
You’ll be:
● Collaborating across an agile team to continuously design, iterate, and develop big data systems.
● Extracting, transforming, and loading data into internal databases.
● Optimizing our new and existing data pipelines for speed and reliability.
● Deploying new products and product improvements.
● Documenting and managing multiple repositories of code.
Mactores is a trusted leader among businesses in providing modern data platform solutions. Since 2008, Mactores has been enabling businesses to accelerate their value through automation by providing end-to-end data solutions that are automated, agile, and secure. We collaborate with customers to strategize, navigate, and accelerate an ideal path forward with a digital transformation via assessments, migration, or modernization.
We are looking for a DataOps Engineer with expertise in operating a data lake. Amazon S3, Amazon EMR, and Apache Airflow for workflow management are used to build the data lake.
You have experience in building and running data lake platforms on AWS. You have exposure to operating PySpark-based ETL jobs in Apache Airflow and Amazon EMR, and expertise in monitoring services like Amazon CloudWatch.
If you love solving problems using your skills, then come join Team Mactores. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.
What you will do?
- Operate the current data lake deployed on AWS with Amazon S3, Amazon EMR, and Apache Airflow
- Debug and fix production issues in PySpark.
- Determine the RCA (Root cause analysis) for production issues.
- Collaborate with product teams for L3/L4 production issues in PySpark.
- Contribute to enhancing the ETL efficiency
- Build CloudWatch dashboards for optimizing the operational efficiencies (see the sketch after this list)
- Handle escalation tickets from L1 Monitoring engineers
- Assign the tickets to L1 engineers based on their expertise
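For illustration of the CloudWatch dashboard work mentioned above, a minimal boto3 sketch that publishes a dashboard with one EMR-related metric widget might look like this; the dashboard name, region, cluster id, and metric choice are hypothetical placeholders.

```python
# Minimal sketch: publish a CloudWatch dashboard with boto3.
# Dashboard name, region, cluster id, and metric choice are hypothetical placeholders.
import json
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

dashboard_body = {
    "widgets": [
        {
            "type": "metric",
            "x": 0, "y": 0, "width": 12, "height": 6,
            "properties": {
                "title": "EMR apps running",
                "region": "us-east-1",
                "stat": "Average",
                "period": 300,
                "metrics": [
                    ["AWS/ElasticMapReduce", "AppsRunning", "JobFlowId", "j-EXAMPLE123"]
                ],
            },
        }
    ]
}

cloudwatch.put_dashboard(
    DashboardName="data-lake-operations",
    DashboardBody=json.dumps(dashboard_body),
)
```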
What are we looking for?
- AWS DataOps engineer.
- Overall 5+ years of experience in the software industry. Experience in developing and architecting data applications using Python or Scala, Airflow, and Kafka on the AWS data platform.
- Must have set up or led the project to enable DataOps on AWS or any other cloud data platform.
- Strong data engineering experience on a cloud platform, preferably AWS.
- Experience with data pipelines designed for reuse and parameterization.
- Experience with pipelines designed to solve common ETL problems.
- Understanding of or experience with how various AWS services (like Amazon EMR and Apache Airflow) can be codified for enabling DataOps.
- Experience in building data pipelines using CI/CD infrastructure.
- Understanding of Infrastructure as Code for DataOps enablement.
- Ability to work with ambiguity and create quick PoCs.
You will be preferred if
- Expertise in Amazon EMR, Apache Airflow, Terraform, CloudWatch
- Exposure to MLOps using Amazon Sagemaker is a plus.
- AWS Solutions Architect Professional or Associate Level Certificate
- AWS DevOps Professional Certificate
Life at Mactores
We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles that honor Decision-making, Leadership, Collaboration, and Curiosity drive how we work.
1. Be one step ahead
2. Deliver the best
3. Be bold
4. Pay attention to the detail
5. Enjoy the challenge
6. Be curious and take action
7. Take leadership
8. Own it
9. Deliver value
10. Be collaborative
We would like you to read more details about the work culture on https://mactores.com/careers
The Path to Joining the Mactores Team
At Mactores, our recruitment process is structured around three distinct stages:
Pre-Employment Assessment:
You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role.
Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities.
HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team.
At Mactores, we are committed to providing equal opportunities in all of our employment practices, and we do not discriminate based on race, religion, gender, national origin, age, disability, marital status, military status, genetic information, or any other category protected by federal, state, and local laws. This policy extends to all aspects of the employment relationship, including recruitment, compensation, promotions, transfers, disciplinary action, layoff, training, and social and recreational programs. All employment decisions will be made in compliance with these principles.
As AWS Data Engineer, you are a full-stack data engineer that loves solving business problems. You work with business leads, analysts, and data scientists to understand the business domain and engage with fellow engineers to build data products that empower better decision-making. You are passionate about the data quality of our business metrics and the flexibility of your solution that scales to respond to broader business questions.
If you love to solve problems using your skills, then come join the Team Mactores. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.
What you will do?
- Write efficient code in PySpark and Amazon Glue
- Write SQL queries in Amazon Athena and Amazon Redshift
- Explore new technologies and learn new techniques to solve business problems creatively
- Collaborate with many teams - engineering and business, to build better data products and services
- Deliver the projects along with the team collaboratively and manage updates to customers on time
What are we looking for?
- 1 to 3 years of experience in Apache Spark, PySpark, Amazon Glue
- 2+ years of experience in writing ETL jobs using PySpark and SparkSQL
- 2+ years of experience in SQL queries and stored procedures
- Have a deep understanding of the DataFrame API and the transformation functions supported by Spark 2.x+ (see the sketch after this list)
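To make the DataFrame/SparkSQL expectation above concrete (purely as an illustration; paths, table names, and columns are hypothetical), a small ETL step might mix both APIs like this:

```python
# Minimal sketch: a small ETL step using both the DataFrame API and Spark SQL.
# Paths, table names, and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, trim, upper

spark = SparkSession.builder.appName("dataframe-sparksql-etl").getOrCreate()

orders = spark.read.parquet("s3://raw-bucket/orders/")

# DataFrame API: basic cleanup transformations.
cleaned = (orders
           .withColumn("country", upper(trim(col("country"))))
           .filter(col("amount") > 0))

cleaned.createOrReplaceTempView("orders_clean")

# Spark SQL: aggregate the cleaned data.
daily_revenue = spark.sql("""
    SELECT order_date, country, SUM(amount) AS revenue
    FROM orders_clean
    GROUP BY order_date, country
""")

daily_revenue.write.mode("overwrite").parquet("s3://curated-bucket/daily_revenue/")
```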
You will be preferred if you have
- Prior experience in working on AWS EMR, Apache Airflow
- Certifications: AWS Certified Big Data – Specialty, Cloudera Certified Big Data Engineer, or Hortonworks Certified Big Data Engineer
- Understanding of DataOps Engineering
Life at Mactores
We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles that honor Decision-making, Leadership, Collaboration, and Curiosity drive how we work.
1. Be one step ahead
2. Deliver the best
3. Be bold
4. Pay attention to the detail
5. Enjoy the challenge
6. Be curious and take action
7. Take leadership
8. Own it
9. Deliver value
10. Be collaborative
We would like you to read more details about the work culture on https://mactores.com/careers
The Path to Joining the Mactores Team
At Mactores, our recruitment process is structured around three distinct stages:
Pre-Employment Assessment:
You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role.
Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities.
HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team.
At Mactores, we are committed to providing equal opportunities in all of our employment practices, and we do not discriminate based on race, religion, gender, national origin, age, disability, marital status, military status, genetic information, or any other category protected by federal, state, and local laws. This policy extends to all aspects of the employment relationship, including recruitment, compensation, promotions, transfers, disciplinary action, layoff, training, and social and recreational programs. All employment decisions will be made in compliance with these principles.
Job Responsibilities:
- Identify valuable data sources and automate collection processes
- Undertake preprocessing of structured and unstructured data.
- Analyze large amounts of information to discover trends and patterns
- Helping develop reports and analysis.
- Present information using data visualization techniques.
- Assessing tests and implementing new or upgraded software and assisting with strategic decisions on new systems.
- Evaluating changes and updates to source production systems.
- Develop, implement, and maintain leading-edge analytic systems, taking complicated problems and building simple frameworks
- Providing technical expertise in data storage structures, data mining, and data cleansing.
- Propose solutions and strategies to business challenges
Desired Skills and Experience:
- At least 1 year of experience in Data Analysis
- Complete understanding of Operations Research, Data Modelling, ML, and AI concepts.
- Knowledge of Python is mandatory, familiarity with MySQL, SQL, Scala, Java or C++ is an asset
- Experience using visualization tools (e.g. Jupyter Notebook) and data frameworks (e.g. Hadoop)
- Analytical mind and business acumen
- Strong math skills (e.g. statistics, algebra)
- Problem-solving aptitude
- Excellent communication and presentation skills.
- Bachelor’s / Master's Degree in Computer Science, Engineering, Data Science or other quantitative or relevant field is preferred
One of the world's leading multinational investment banks
Good exposure to concepts and/or technology across the broader spectrum. Enterprise Risk Technology covers a variety of existing systems and green-field projects.
Full-stack Hadoop development experience with Scala development.
Full-stack Java development experience covering Core Java (including JDK 1.8) and a good understanding of design patterns.
Requirements:-
• Strong hands-on development in Java technologies.
• Strong hands-on development in Hadoop technologies like Spark, Scala and experience on Avro.
• Participation in product feature design and documentation
• Requirement break-up, ownership and implementation.
• Product BAU deliveries and Level 3 production defects fixes.
Qualifications & Experience
• Degree holder in numerate subject
• Hands on Experience on Hadoop, Spark, Scala, Impala, Avro and messaging like Kafka
• Experience across a core compiled language – Java
• Proficiency in Java-related frameworks like Spring, Hibernate, JPA
• Hands-on experience in JDK 1.8 and a strong skillset covering Collections and Multithreading, with experience working on distributed applications.
• Strong hands-on development track record with end-to-end development cycle involvement
• Good exposure to computational concepts
• Good communication and interpersonal skills
• Working knowledge of risk and derivatives pricing (optional)
• Proficiency in SQL (PL/SQL), data modelling.
• Understanding of Hadoop architecture and the Scala programming language is good to have.
at Altimetrik
Bigdata with cloud:
Experience : 5-10 years
Location : Hyderabad/Chennai
Notice period : 15-20 days Max
1. Expertise in building AWS Data Engineering pipelines with AWS Glue -> Athena -> QuickSight
2. Experience in developing lambda functions with AWS Lambda
3. Expertise with Spark/PySpark – Candidate should be hands on with PySpark code and should be able to do transformations with Spark
4. Should be able to code in Python and Scala.
5. Snowflake experience will be a plus
at Altimetrik
- Expertise in building AWS Data Engineering pipelines with AWS Glue -> Athena -> QuickSight.
- Experience in developing Lambda functions with AWS Lambda.
- Expertise with Spark/PySpark – candidate should be hands-on with PySpark code and should be able to do transformations with Spark.
- Should be able to code in Python and Scala.
- Snowflake experience will be a plus.
at Altimetrik
Experience in developing lambda functions with AWS Lambda
Expertise with Spark/PySpark – Candidate should be hands on with PySpark code and should be able to do transformations with Spark
Should be able to code in Python and Scala.
Snowflake experience will be a plus
at Altimetrik
- Expertise in building AWS Data Engineering pipelines with AWS Glue -> Athena -> QuickSight
- Experience in developing lambda functions with AWS Lambda
- Expertise with Spark/PySpark – Candidate should be hands on with PySpark code and should be able to do transformations with Spark
- Should be able to code in Python and Scala.
- Snowflake experience will be a plus
Altimetrik
Big data Developer
Experience: 3 to 7 years.
Job Location: Hyderabad
Notice: Immediate / within 30 days
1. Expertise in building AWS Data Engineering pipelines with AWS Glue -> Athena -> QuickSight
2. Experience in developing lambda functions with AWS Lambda
3. Expertise with Spark/PySpark – candidate should be hands-on with PySpark code and should be able to do transformations with Spark
4. Should be able to code in Python and Scala.
5. Snowflake experience will be a plus
Hadoop and Hive can be kept as good-to-have requirements (an understanding of them is enough) rather than as mandatory requirements.
Data Engineer
- Highly skilled and proficient in the Azure Data Engineering tech stack (ADF, Databricks).
- Well experienced in the design and development of big data integration platforms (Kafka, Hadoop).
- Highly skilled and experienced in building medium to complex data integration pipelines for data at rest and streaming data using Spark.
- Strong knowledge of R/Python.
- Advanced proficiency in solution design and implementation through Azure Data Lake, SQL and NoSQL databases.
- Strong in Data Warehousing concepts.
- Expertise in SQL, SQL tuning, Data Management (Data Security), schema design, Python and ETL processes.
- Highly motivated, self-starter and quick learner.
- Must have good knowledge of data modelling and an understanding of data analytics.
- Exposure to statistical procedures, experiments and Machine Learning techniques is an added advantage.
- Experience in leading a small team of 6/7 Data Engineers.
- Excellent written and verbal communication skills.
- Mandatory - Hands-on experience in Python and PySpark.
- Build PySpark applications using Spark DataFrames in Python, using Jupyter Notebook and PyCharm (IDE).
- Worked on optimizing Spark jobs that process huge volumes of data.
- Hands-on experience in version control tools like Git.
- Worked on Amazon's analytics services like Amazon EMR, Lambda functions, etc.
- Worked on Amazon's compute services like AWS Lambda and Amazon EC2, Amazon's storage service S3, and a few other services like SNS.
- Experience/knowledge of bash/shell scripting will be a plus.
- Experience in working with fixed-width, delimited and multi-record file formats, etc.
- Hands on experience in tools like Jenkins to build, test and deploy the applications
- Awareness of Devops concepts and be able to work in an automated release pipeline environment.
- Excellent debugging skills.
Top Management Consulting Company
We are looking for a Machine Learning engineer for one of our premium clients.
Experience: 2-9 years
Location: Gurgaon/Bangalore
Tech Stack:
Python, PySpark, the Python Scientific Stack; MLFlow, Grafana, Prometheus for machine learning pipeline management and monitoring; SQL, Airflow, Databricks, our own open-source data pipelining framework called Kedro, Dask/RAPIDS; Django, GraphQL and ReactJS for horizontal product development; container technologies such as Docker and Kubernetes, CircleCI/Jenkins for CI/CD, cloud solutions such as AWS, GCP, and Azure as well as Terraform and Cloudformation for deployment
Skills and requirements
- Experience analyzing complex and varied data in a commercial or academic setting.
- Desire to solve new and complex problems every day.
- Excellent ability to communicate scientific results to both technical and non-technical team members.
Desirable
- A degree in a numerically focused discipline such as Maths, Physics, Chemistry, Engineering or Biological Sciences.
- Hands on experience on Python, Pyspark, SQL
- Hands on experience on building End to End Data Pipelines.
- Hands-on experience on Azure Data Factory, Azure Databricks, Data Lake - added advantage
- Hands on Experience in building data pipelines.
- Experience with Bigdata Tools, Hadoop, Hive, Sqoop, Spark, SparkSQL
- Experience with SQL or NoSQL databases for the purposes of data retrieval and management.
- Experience in data warehousing and business intelligence tools, techniques and technology, as well as experience in diving deep on data analysis or technical issues to come up with effective solutions.
- BS degree in math, statistics, computer science or equivalent technical field.
- Experience in data mining structured and unstructured data (SQL, ETL, data warehouse, Machine Learning etc.) in a business environment with large-scale, complex data sets.
- Proven ability to look at solutions in unconventional ways. Sees opportunities to innovate and can lead the way.
- Willing to learn and work on Data Science, ML, AI.
Hiring - Python Developer Freelance Consultant (WFH-Remote)
Greetings from Deltacubes Technology!!
Skillset Required:
Python
Pyspark
AWS
Scala
Experience:
5+ years
Thanks
Bavithra
consulting & implementation services in the area of Oil & Gas, Mining and Manufacturing Industry
- Data Engineer
Required skill set: AWS GLUE, AWS LAMBDA, AWS SNS/SQS, AWS ATHENA, SPARK, SNOWFLAKE, PYTHON
Mandatory Requirements
- Experience in AWS Glue
- Experience in Apache Parquet
- Proficient in AWS S3 and data lake
- Knowledge of Snowflake
- Understanding of file-based ingestion best practices.
- Scripting languages - Python & PySpark
CORE RESPONSIBILITIES
- Create and manage cloud resources in AWS
- Data ingestion from different data sources which expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time series data based on various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies.
- Data processing/transformation using various technologies such as Spark and cloud services. You will need to understand your part of the business logic and implement it using the language supported by the base data platform.
- Develop automated data quality checks to make sure the right data enters the platform, and verify the results of the calculations.
- Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible
- Identify and interpret trends and patterns from complex data sets
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.
- Key participant in regular Scrum ceremonies with the agile teams
- Proficient at developing queries, writing reports and presenting findings (an Athena query sketch, for illustration, follows this list)
- Mentor junior members and bring best industry practices
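As an illustration of the query development mentioned above, a minimal boto3 sketch that runs an Athena query over the data lake could look like the following; the database, table, query, and output location are hypothetical.

```python
# Minimal sketch: run an Athena query with boto3 and wait for it to finish.
# Database, table, query, and S3 output location are hypothetical.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="SELECT product_code, COUNT(*) AS loans FROM loans_curated GROUP BY product_code",
    QueryExecutionContext={"Database": "finance_lake"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/queries/"},
)

query_id = response["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"]:
        print([f.get("VarCharValue") for f in row["Data"]])
```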
QUALIFICATIONS
- 5-7+ years’ experience as data engineer in consumer finance or equivalent industry (consumer loans, collections, servicing, optional product, and insurance sales)
- Strong background in math, statistics, computer science, data science or related discipline
- Advanced knowledge of one of these languages: Java, Scala, Python, C#
- Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
- Proficient with
- Data mining/programming tools (e.g. SAS, SQL, R, Python)
- Database technologies (e.g. PostgreSQL, Redshift, Snowflake. and Greenplum)
- Data visualization (e.g. Tableau, Looker, MicroStrategy)
- Comfortable learning about and deploying new technologies and tools.
- Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines.
- Good written and oral communication skills and ability to present results to non-technical audiences
- Knowledge of business intelligence and analytical tools, technologies and techniques.
Familiarity and experience in the following is a plus:
- AWS certification
- Spark Streaming
- Kafka Streaming / Kafka Connect
- ELK Stack
- Cassandra / MongoDB
- CI/CD: Jenkins, GitLab, Jira, Confluence other related tools
- Experience and expertise in Python development and its different libraries like PySpark, pandas, NumPy
- Expertise in ADF, Databricks.
- Creating and maintaining data interfaces across a number of different protocols (file, API).
- Creating and maintaining internal business process solutions to keep our corporate system data in sync and reduce manual processes where appropriate.
- Creating and maintaining monitoring and alerting workflows to improve system transparency.
- Facilitate the development of our Azure cloud infrastructure relative to Data and Application systems.
- Design and lead development of our data infrastructure including data warehouses, data marts, and operational data stores.
- Experience in using Azure services such as ADLS Gen 2, Azure Functions, Azure messaging services, Azure SQL Server, Azure KeyVault, Azure Cognitive services etc.