11+ Dataflow Architecture Jobs in Pune
Apply to 11+ Dataflow architecture jobs in Pune on CutShort.io. Explore the latest Dataflow architecture job opportunities across top companies like Google, Amazon, and Adobe.
The candidate will be deployed at a financial captive organization in Pune (Kharadi).
Below are the job details:
Experience: 10 to 18 years
Mandatory skills:
- Data migration
- Dataflow
The ideal candidate will have the following experience and qualifications:
- Experience building a range of services with a cloud service provider (ideally GCP)
- Hands-on design and development on Google Cloud Platform (GCP) across a wide range of services, including GCP storage and database technologies
- Hands-on experience architecting, designing, and implementing solutions on GCP, Kubernetes, and other Google technologies, including security and compliance (e.g., IAM and cloud compliance/auditing/monitoring tools)
- Desired skills within the GCP stack: Cloud Run, GKE, serverless, Cloud Functions, Vision API, DLP, Dataflow, Data Fusion
- Prior experience migrating on-premises applications to cloud environments; knowledge and hands-on experience of Stackdriver, Pub/Sub, VPCs, subnets, route tables, load balancers, and firewalls, both on-premises and on GCP (a minimal Pub/Sub sketch follows this list)
- Integrate, configure, deploy, and manage centrally provided common cloud services (e.g., IAM, networking, logging, operating systems, containers)
- Manage SDN in GCP; knowledge and experience of DevOps practices for continuous integration and delivery on GCP using Jenkins
- Hands-on experience with Terraform, Kubernetes, Docker, and Stackdriver
- Programming experience in one or more of the following languages: Python, Ruby, Java, JavaScript, Go, Groovy, Scala
- Knowledge of or experience with DevOps tooling such as Jenkins, Git, Ansible, Splunk, Jira or Confluence, AppDynamics, Docker, and Kubernetes
- Act as a consultant and subject-matter expert for internal teams to resolve technical deployment obstacles and improve the product vision; ensure compliance with centrally defined security policies
- Financial experience is preferred
- Ability to learn new technologies and rapidly prototype newer concepts
- Top-down thinker, excellent communicator, and great problem solver
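Since the role calls for hands-on Pub/Sub experience, below is a minimal sketch of publishing a message with the google-cloud-pubsub Python client; the project and topic names are hypothetical placeholders, not part of the listing.

    # Minimal Pub/Sub publisher sketch (illustrative only; "my-project"
    # and "migration-events" are hypothetical names).
    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "migration-events")

    # Data must be a bytestring; extra keyword arguments become string attributes.
    future = publisher.publish(topic_path, data=b'{"status": "migrated"}', source="on-prem")
    print(f"Published message ID: {future.result()}")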
Experience: 10 to 18 years
Location: Pune
The candidate must have experience in the following:
- GCP Data Platform
- Data processing: Dataflow, Dataprep, Data Fusion (see the pipeline sketch after this list)
- Data storage: BigQuery, Cloud SQL
- Pub/Sub, GCS buckets
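To give a concrete picture of this stack, here is a minimal Apache Beam pipeline sketch of the kind Dataflow runs, reading from Pub/Sub and writing to BigQuery; all project, topic, and table names are hypothetical.

    # Minimal streaming pipeline sketch: Pub/Sub -> BigQuery.
    # Run on Dataflow by passing --runner=DataflowRunner; resource names are placeholders.
    import json
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(streaming=True)

    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/events")
            | "ParseJson" >> beam.Map(json.loads)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "my-project:analytics.events",  # assumes the target table already exists
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )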
Strong Lead – User Research & Analytics profile (behavioral/user/product/UX analytics)
Mandatory (Experience 1): Must have 10+ years of experience in Behavioral Data Analytics, User Research, or Product Insights, driving data-informed decision-making for B2C digital products (web and app).
Mandatory (Experience 2): Must have 6+ months of experience analyzing user journeys, clickstream, and behavioral data using tools such as Google Analytics, Mixpanel, CleverTap, Firebase, or Amplitude.
Mandatory (Experience 3): Experience in leading cross-functional user research and analytics initiatives in collaboration with Product, Design, Engineering, and Business teams to translate behavioral insights into actionable strategies.
Mandatory (Skills 1): Strong expertise in A/B testing and experimentation, including hypothesis design, execution, statistical validation, and impact interpretation (see the sketch after this list).
Mandatory (Skills 2): Ability to identify behavioral patterns, funnel drop-offs, engagement trends, and user journey anomalies using large datasets and mixed-method analysis.
Mandatory (Skills 3): Hands-on proficiency in SQL, Excel, and data visualization/storytelling tools such as Tableau, Power BI, or Looker for executive reporting and dashboard creation.
Mandatory (Skills 4): Deep understanding of UX principles, customer journey mapping, and product experience design, with experience integrating qualitative and quantitative insights.
Mandatory (Company): Experience in B2C product organizations (fintech, e-commerce, edtech, or consumer platforms) with large-scale user datasets and analytics maturity.
Mandatory (Note): We are not looking for data analysts; we want strategic behavioral-insight leaders or research-driven analytics professionals focused on user behavior and product decision-making.
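For context on the A/B-testing expectation above, here is a small sketch of statistical validation with a two-proportion z-test; the conversion counts are made-up numbers for illustration.

    # Two-proportion z-test for an A/B experiment (made-up counts).
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [420, 510]    # conversions in variants A and B
    exposures = [10000, 10000]  # users exposed to each variant

    z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Statistically significant difference between variants.")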
· Extensive experience in Appian BPM application development
· Knowledge of Appian architecture and best practices for its objects
· Participate in the analysis, design, and new development of Appian-based applications
· Provide team and technical leadership to Scrum teams
· Must be able to multitask, work in a fast-paced environment, and resolve problems faced by the team
· Build applications: interfaces, process flows, expressions, data types, sites, integrations, etc.
· Proficient with SQL queries and with accessing data in DB tables and views
· Experience in analysis and in designing process models, records, reports, SAIL, forms, gateways, smart services, integration services, and web services
· Experience working with different Appian object types, query rules, constant rules, and expression rules
Qualifications
· At least 6 years of experience implementing BPM solutions using Appian 19.x or higher
· Over 8 years of experience implementing IT solutions using BPM or integration technologies
· Certification mandatory: L1 and L2
· Experience with Scrum/Agile methodologies on enterprise-level application development projects
· Good understanding of database concepts and strong working knowledge of at least one major database, e.g., Oracle, SQL Server, or MySQL
Additional information
Skills Required
· Appian BPM application development on version 19.x or higher
· Experience with integrations using web services, e.g., XML, REST, WSDL, SOAP APIs, JDBC, and JMS
· Good leadership skills and the ability to technically lead a team of software engineers
· Experience working in Agile/Scrum teams
· Good communication skills
Position: AWS Data Engineer
Experience: 5 to 7 Years
Location: Bengaluru, Pune, Chennai, Mumbai, Gurugram
Work Mode: Hybrid (3 days work from office per week)
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and motivated AWS Data Engineer with 5–7 years of experience in building and optimizing data pipelines, architectures, and data sets. The ideal candidate will have strong experience with AWS services including Glue, Athena, Redshift, Lambda, DMS, RDS, and CloudFormation. You will be responsible for managing the full data lifecycle from ingestion to transformation and storage, ensuring efficiency and performance.
Key Responsibilities:
- Design, develop, and optimize scalable ETL pipelines using AWS Glue, Python/PySpark, and SQL (a minimal Glue job sketch follows this list).
- Work extensively with AWS services such as Glue, Athena, Lambda, DMS, RDS, Redshift, CloudFormation, and other serverless technologies.
- Implement and manage data lake and warehouse solutions using AWS Redshift and S3.
- Optimize data models and storage for cost-efficiency and performance.
- Write advanced SQL queries to support complex data analysis and reporting requirements.
- Collaborate with stakeholders to understand data requirements and translate them into scalable solutions.
- Ensure high data quality and integrity across platforms and processes.
- Implement CI/CD pipelines and best practices for infrastructure as code using CloudFormation or similar tools.
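As a rough illustration of the first responsibility, here is a skeleton of an AWS Glue PySpark job; the catalog database, table, and S3 bucket names are hypothetical.

    # Skeleton Glue job: read from the Data Catalog, remap columns, write Parquet to S3.
    import sys
    from awsglue.transforms import ApplyMapping
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    source = glue_context.create_dynamic_frame.from_catalog(
        database="raw_db", table_name="orders"  # hypothetical catalog entries
    )
    mapped = ApplyMapping.apply(
        frame=source,
        mappings=[("order_id", "string", "order_id", "string"),
                  ("amount", "string", "amount", "double")],
    )
    glue_context.write_dynamic_frame.from_options(
        frame=mapped,
        connection_type="s3",
        connection_options={"path": "s3://my-bucket/curated/orders/"},  # placeholder bucket
        format="parquet",
    )
    job.commit()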
Required Skills & Experience:
- Strong hands-on experience with Python or PySpark for data processing.
- Deep knowledge of AWS Glue, Athena, Lambda, Redshift, RDS, DMS, and CloudFormation.
- Proficiency in writing complex SQL queries and optimizing them for performance.
- Familiarity with serverless architectures and AWS best practices.
- Experience in designing and maintaining robust data architectures and data lakes.
- Ability to troubleshoot and resolve data pipeline issues efficiently.
- Strong communication and stakeholder management skills.
Experience: 2 to 6 years
Work Location: Pune
Datametica is looking for talented SQL engineers who will receive training and the opportunity to work on Cloud and Big Data analytics.
Mandatory Skills:
- Strong SQL development skills (a small illustrative query follows this list)
- Hands-on experience with at least one scripting language, preferably shell scripting
- Development experience on data warehouse projects
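To make the SQL expectation concrete, here is a tiny illustrative aggregate query, run through Python's built-in sqlite3 module so it is self-contained; the table and data are invented.

    # Warehouse-style aggregation on an invented table.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("west", 120.0), ("east", 75.5), ("west", 30.0)])

    # Total sales per region, highest first.
    for region, total in conn.execute(
            "SELECT region, SUM(amount) AS total FROM sales "
            "GROUP BY region ORDER BY total DESC"):
        print(region, total)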
Opportunities:
- Selected candidates will be given learning opportunities in one or more of the following: Google Cloud, AWS, DevOps tools, and Big Data technologies such as Hadoop, Pig, Hive, Spark, Sqoop, Flume, and Kafka
- A chance to be part of enterprise-grade implementations of Cloud and Big Data systems
- An active role in setting up a modern data platform based on Cloud and Big Data
- Membership in teams with rich experience in various aspects of distributed systems and computing
Software Development Engineer
- 2 to 6 years of software development experience
- Good grasp of programming fundamentals, including OOP, design patterns, and data structures
- Excellent analytical, logical, and problem-solving skills
- Good understanding of the complexities involved in designing and developing large-scale systems
- Strong system design skills
- Experience with technologies such as Elasticsearch, Redis, and Kafka
- Good knowledge of relational and NoSQL databases
- Familiarity with common machine learning algorithms; in-depth knowledge is a plus
- Experience with Big Data technologies such as Hadoop, Spark, and Hive is a big plus
- Ability to understand business requirements and take ownership of the work
- Passion and enthusiasm for building and maintaining large-scale platforms
Who are we looking for?
A passionate developer
What’s non-negotiable?
- Strong working knowledge of OOP
- Functional programming principles
- A strong believer in Clean Code practices
- An advocate of TDD, BDD, SOLID, CI/CD, and Lean development
You’ll easily settle in if:
You come from a strong Java/J2EE background with web application frameworks like Spring Boot or Dropwizard
You have ample work experience in caching, multi-threading, asynchronous APIs, exception management, use of collections, mocking, and unit-testing tools like JUnit and TestNG
You are fluent with version control tools like Git and Bitbucket
Experience with continuous integration, continuous deployment, static code analysis, Jenkins, and SonarQube
Willingness to take ownership of the technical solution and ensure the technical expectations of deliverables are met
Exposure to AWS/Azure cloud and containerization
A good understanding of distributed application architecture
You'll love:
An awesome opportunity to work with a microservices architecture shipped to the cloud
Working with automated build and deploy pipelines powered by code analysis, automated tests, functional and non-functional analysis, blue-green deployment, and much more
First-hand experience with broader enterprise application concerns such as message buses, queues, caches, concurrency, and parallelization
- Expertise in designing and implementing enterprise-scale database (OLTP) and data warehouse solutions.
- Hands-on experience implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics), and big data processing using Azure Databricks and Azure HDInsight.
- Expert in T-SQL programming for complex stored procedures, functions, views, and query optimization.
- Aware of database development for both on-premises and SaaS applications using SQL Server and PostgreSQL.
- Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
- Experience and expertise in building machine learning models using logistic and linear regression, decision tree, and random forest algorithms.
- Writing PolyBase queries for exporting and importing data in Azure Data Lake.
- Building data models, both tabular and multidimensional, using SQL Server Data Tools.
- Writing data preparation, cleaning, and processing steps using Python, Scala, and R (a small pandas sketch follows this list).
- Programming experience with the Python libraries NumPy, pandas, and Matplotlib.
- Implementing NoSQL graph databases and writing queries using Cypher.
- Designing end-user visualizations using Power BI, QlikView, and Tableau.
- Experience working with all versions of SQL Server: 2005, 2008, 2008 R2, 2012, 2014, 2016, 2017, and 2019.
- Experience using the expression languages MDX and DAX.
- Experience migrating on-premises SQL Server databases to Microsoft Azure.
- Hands-on experience using Azure Blob Storage, Azure Data Lake Storage Gen1, and Azure Data Lake Storage Gen2.
- Performance tuning complex SQL queries, with hands-on experience using SQL Server Extended Events.
- Data modeling using Power BI for ad hoc reporting.
- Automating raw data loads using T-SQL and SSIS.
- Expert in migrating existing on-premises databases to Azure SQL.
- Experience in using U-SQL for Azure Data Lake Analytics.
- Hands-on experience generating SSRS reports using MDX.
- Experience in designing predictive models using Python and SQL Server.
- Developing machine learning models using Azure Databricks and SQL Server.
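As a small illustration of the Python data-preparation point above, here is a hedged pandas sketch; the file and column names are hypothetical.

    # Illustrative cleaning/processing step with pandas (hypothetical data).
    import numpy as np
    import pandas as pd

    df = pd.read_csv("sales.csv")                      # hypothetical input file
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    df = df.dropna(subset=["amount"])                  # drop unparseable rows
    df["log_amount"] = np.log1p(df["amount"])          # tame a skewed distribution
    monthly = df.groupby("month", as_index=False)["amount"].sum()
    print(monthly.head())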