13+ Data modeling Jobs in Pune | Data modeling Job openings in Pune
Apply to 13+ Data modeling Jobs in Pune on CutShort.io. Explore the latest Data modeling Job opportunities across top companies like Google, Amazon & Adobe.
at Tech Prescient
Job Title: Power BI Analyst with SQL Expertise
Experience Level: 5 years
Location: Pune / Remote
Job Overview:
We are looking for a talented Power BI Analyst with strong analytical skills and proficiency in SQL to join our team. This role involves developing insightful dashboards and reports using Power BI, conducting in-depth data analysis, and translating data insights into actionable business recommendations. The ideal candidate will have a knack for turning data into stories and possess a solid foundation in SQL for efficient querying and data manipulation.
Must-Have Skills:
- Power BI Proficiency: Expertise in designing and developing complex Power BI dashboards and reports.
- SQL Querying: Strong SQL skills for data extraction, manipulation, and complex query building (see the sketch after this list).
- Analytical Skills: Ability to analyze large datasets to identify trends, patterns, and insights that support strategic decisions.
- Data Modeling: Experience with data modeling techniques to ensure optimal performance in Power BI reports.
- Communication: Ability to communicate insights effectively to non-technical stakeholders.
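To make the SQL expectation concrete, here is a minimal, hedged sketch of the kind of windowed query such a role involves. It uses an in-memory SQLite database, and the table and column names (monthly_sales, region, amount) are invented purely for illustration; any real schema would differ.

```python
import sqlite3

# In-memory database with a hypothetical sales table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE monthly_sales (region TEXT, month TEXT, amount REAL);
    INSERT INTO monthly_sales VALUES
        ('West', '2024-01', 120.0), ('West', '2024-02', 150.0),
        ('East', '2024-01', 90.0),  ('East', '2024-02', 80.0);
""")

# A window function computes the month-over-month change per region,
# the kind of trend metric a Power BI report would surface.
query = """
    SELECT region, month, amount,
           amount - LAG(amount) OVER (
               PARTITION BY region ORDER BY month
           ) AS mom_change
    FROM monthly_sales
    ORDER BY region, month;
"""
for row in conn.execute(query):
    print(row)
```

In Power BI itself the same month-over-month logic would more likely live in a DAX measure; pushing it into SQL, as here, keeps the model lean when the source database can do the work.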
Good-to-Have Skills:
- DAX (Data Analysis Expressions): Experience with DAX for advanced calculations in Power BI.
- Experience with BI Tools: Familiarity with other BI tools such as Tableau, QlikView, or similar.
- Python or R: Knowledge of Python or R for data analysis tasks outside of Power BI.
- Cloud Platforms: Familiarity with data warehousing solutions on AWS, Azure, or GCP.
Key Responsibilities and Duties:
- Design, develop, and maintain Power BI reports and dashboards that meet business needs.
- Write, optimize, and troubleshoot complex SQL queries for data extraction and transformation.
- Perform data analysis to uncover trends and insights, supporting strategic and operational decisions.
- Collaborate with stakeholders to gather and refine reporting requirements.
- Ensure data integrity, accuracy, and reliability in all reports and analytics.
Required Qualifications:
- Bachelor’s degree in Computer Science, Data Analytics, Statistics, or a related field.
- Proven experience in Power BI report development and SQL querying.
- Strong analytical skills with a track record of interpreting data to support business decisions.
Preferred Qualifications:
- Master’s degree in Data Science, Business Analytics, or related fields.
- Certifications in Power BI, SQL, or data analytics.
- Experience with DAX, Python, or R.
Other Competencies:
- Problem-Solving: Ability to approach data challenges with innovative solutions.
- Attention to Detail: Keen eye for detail to ensure data accuracy and clarity in reports.
- Collaboration: Works well in a team environment, especially with cross-functional stakeholders.
Skills & Experience:
❖ 5+ years of experience as a Data Engineer
❖ Hands-on, in-depth experience with star/snowflake schema design, data modeling, data pipelining, and MLOps
❖ Experience in Data Warehouse technologies (e.g., Snowflake, AWS Redshift)
❖ Experience in AWS data pipelines (Lambda, AWS Glue, Step Functions, etc.); a minimal Lambda sketch follows this list
❖ Proficient in SQL
❖ At least one major programming language (Python / Java)
❖ Experience with Data Analysis Tools such as Looker or Tableau
❖ Experience with Pandas, Numpy, Scikit-learn, and Jupyter notebooks preferred
❖ Familiarity with Git, GitHub, and JIRA.
❖ Ability to locate & resolve data quality issues
❖ Ability to demonstrate end-to-end data platform support experience
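As a rough illustration of the AWS pipeline skills listed above, the sketch below shows a minimal Lambda handler that reacts to an S3 upload and copies the object into a staging prefix. The bucket layout, the "staging/" prefix, and the event wiring are assumptions for illustration, not part of the job description.

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Minimal S3-triggered Lambda: copy each new object to a staging prefix.

    Assumes the function is subscribed to S3 ObjectCreated events; the
    'staging/' prefix and bucket layout are hypothetical.
    """
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        s3.copy_object(
            Bucket=bucket,
            Key=f"staging/{key}",
            CopySource={"Bucket": bucket, "Key": key},
        )
    return {"status": "ok"}
```

In a fuller pipeline this handler would typically hand off to AWS Glue or Step Functions for transformation and orchestration rather than doing the work inline.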
Other Skills:
❖ Individual contributor
❖ Hands-on with programming
❖ Strong analytical and problem-solving skills with meticulous attention to detail
❖ A positive mindset and can-do attitude
❖ A great team player with an eye for detail
❖ Looks for opportunities to simplify and automate tasks and build reusable components
❖ Ability to judge suitability of new technologies for solving business problems
❖ Build strong relationships with analysts, business, and engineering stakeholders
❖ Task Prioritization
❖ Familiar with agile methodologies.
❖ Fintech or Financial services industry experience
❖ Eagerness to learn about the Private Equity/Venture Capital ecosystem and the associated secondary market
Responsibilities:
o Design, develop and maintain a data platform that is accurate, secure, available, and fast.
o Engineer efficient, adaptable, and scalable data pipelines to process data.
o Integrate and maintain a variety of data sources: different databases, APIs, SaaS products, files, logs, events, etc.
o Create standardized datasets to service a wide variety of use cases.
o Develop subject-matter expertise in tables, systems, and processes.
o Partner with product and engineering to ensure product changes integrate well with the data platform.
o Partner with diverse stakeholder teams, understand their challenges, and empower them with data solutions to meet their goals.
o Perform data quality checks on data sources, and automate and maintain a quality control capability.
at Wissen Technology
Responsibilities:
- Design, implement, and maintain scalable and reliable database solutions on the AWS platform.
- Architect, deploy, and optimize DynamoDB databases for performance, scalability, and cost-efficiency (see the sketch after this list).
- Configure and manage AWS OpenSearch (formerly Amazon Elasticsearch Service) clusters for real-time search and analytics capabilities.
- Design and implement data processing and analytics solutions using AWS EMR (Elastic MapReduce) for large-scale data processing tasks.
- Collaborate with cross-functional teams to gather requirements, design database solutions, and implement best practices.
- Perform performance tuning, monitoring, and troubleshooting of database systems to ensure high availability and performance.
- Develop and maintain documentation, including architecture diagrams, configurations, and operational procedures.
- Stay current with the latest AWS services, database technologies, and industry trends to provide recommendations for continuous improvement.
- Participate in the evaluation and selection of new technologies, tools, and frameworks to enhance database capabilities.
- Provide guidance and mentorship to junior team members, fostering knowledge sharing and skill development.
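As a hedged illustration of the DynamoDB work described above, the sketch below creates a table with a composite key and runs a key-condition query via boto3. The table name, key schema, and on-demand billing mode are assumptions chosen for the example, not a prescribed design.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")

# Hypothetical design: orders partitioned by customer, sorted by order date,
# billed on demand to avoid up-front capacity planning.
table = dynamodb.create_table(
    TableName="orders",
    KeySchema=[
        {"AttributeName": "customer_id", "KeyType": "HASH"},
        {"AttributeName": "order_date", "KeyType": "RANGE"},
    ],
    AttributeDefinitions=[
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_date", "AttributeType": "S"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

# Fetch one customer's 2024 orders using only the key attributes;
# querying on the partition/sort key is what keeps DynamoDB cheap and fast.
resp = table.query(
    KeyConditionExpression=Key("customer_id").eq("C42")
    & Key("order_date").begins_with("2024-")
)
print(resp["Items"])
```

The design choice to anticipate access patterns in the key schema, rather than normalizing as in an RDBMS, is the core of the cost and performance tuning this role calls for.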
Requirements:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Proven experience as an AWS Architect or similar role, with a focus on database technologies.
- Hands-on experience designing, implementing, and optimizing DynamoDB databases in production environments.
- In-depth knowledge of AWS OpenSearch (Elasticsearch) and experience configuring and managing clusters for search and analytics use cases.
- Proficiency in working with AWS EMR (Elastic MapReduce) for big data processing and analytics.
- Strong understanding of database concepts, data modelling, indexing, and query optimization.
- Experience with AWS services such as S3, EC2, RDS, Redshift, Lambda, and CloudFormation.
- Excellent problem-solving skills and the ability to troubleshoot complex database issues.
- Solid understanding of cloud security best practices and experience implementing security controls in AWS environments.
- Strong communication and collaboration skills with the ability to work effectively in a team environment.
- AWS certifications such as AWS Certified Solutions Architect, AWS Certified Database - Specialty, or equivalent certifications are a plus.
What’s in it for you?
Opportunity To Unlock Your Creativity
Think of all the times you were held back from trying new ideas because you were boxed in by bureaucratic legacy processes or old-school tactics. A growth mindset has been deeply ingrained in our company culture since day one, so Fictiv is an environment where you have the creative liberty and the support of the team to try big, bold ideas to achieve our sales and customer goals.
Opportunity To Grow Your Career
At Fictiv, you'll be surrounded by supportive teammates who will push you to be your best through their curiosity and passion.
Impact In This Role
Excellent problem solving, decision-making and critical thinking skills.
Collaborative, a team player.
Excellent verbal and written communication skills.
Exhibits initiative, integrity and empathy.
Enjoy working with a diverse group of people in multiple regions.
Comfortable not knowing answers, but resourceful and able to resolve issues.
Self-starter; comfortable with ambiguity, asking questions and constantly learning.
Customer service mentality; advocates for another person's point of view.
Methodical and thorough in written documentation and communication.
Culture oriented; wants to work with people rather than in isolation.
You will report to the Director of IT Engineering.
What You’ll Be Doing
- Interface with Business Analysts and Stakeholders to understand & clarify requirements
- Develop technical designs for solution development
- Implement high quality, scalable solutions following best practices, including configuration and code.
- Deploy solutions and code using automated deployment tools
- Take ownership of technical deliverables, ensure that quality work is completed, fully tested, delivered on time.
- Conduct code reviews, optimization, and refactoring to minimize technical debt within Salesforce implementations.
- Collaborate with cross-functional teams to integrate Salesforce with other systems and platforms, ensuring seamless data flow and system interoperability (a minimal integration sketch follows this list).
- Identify opportunities for process improvements, mentor and support other developers/team members as needed.
- Stay updated on new Salesforce features and functionalities and provide recommendations for process improvements.
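To ground the integration point above, here is a minimal sketch of querying Salesforce from an external system using the third-party simple-salesforce Python library (an assumption; Apex callouts or MuleSoft would be equally valid routes). The credentials, object, and field names are placeholders; a real integration would also handle secret storage, pagination, and error handling.

```python
from simple_salesforce import Salesforce

# Placeholder credentials; in practice these come from a secrets manager.
sf = Salesforce(
    username="user@example.com",
    password="password",
    security_token="token",
)

# SOQL query against a standard object; the field list is illustrative.
result = sf.query("SELECT Id, Name, AnnualRevenue FROM Account LIMIT 10")
for record in result["records"]:
    print(record["Id"], record["Name"])

# Pushing a change back, e.g. syncing a figure from an external system:
# sf.Account.update(record["Id"], {"AnnualRevenue": 1_000_000})
```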
Desired Traits
- 8-10 years of experience in Salesforce development
- Proven experience in developing Salesforce solutions with a deep understanding of Apex, Visualforce, Lightning Web Components, and Salesforce APIs.
- Experience with Salesforce CPQ, Sales/Manufacturing Cloud, and Case Management
- Experienced in designing and implementing custom solutions that align with business needs.
- Strong knowledge of Salesforce data modeling, reporting, and database design.
- Demonstrated experience in building and maintaining integrations between Salesforce and external applications.
- Strong unit testing, functional testing and debugging skills
- Strong understanding of Salesforce development best practices
- Active Salesforce Certifications are desirable.
- Experience with MuleSoft is a plus
- Excellent communication skills and the ability to translate complex technical requirements into actionable solutions.
Interested in learning more? We look forward to hearing from you soon.
As an engineer, you will help with the implementation and launch of many key product features. You will get an opportunity to work on a wide range of technologies (including Spring, AWS Elasticsearch, Lambda, ECS, Redis, Spark, Kafka, etc.) and apply new technologies to solve problems. You will have an influence on defining product features, drive operational excellence, and spearhead the best practices that enable a quality product. You will get to work with skilled and motivated engineers who are already contributing to building high-scale, highly available systems.
If you are looking for an opportunity to work on leading technologies, would like to build product technology that can cater to millions of customers while providing them the best experience, and relish large ownership and diverse technologies, join our team today!
What You'll Do:
- Creating detailed design, working on development and performing code reviews.
- Implementing validation and support activities in line with architecture requirements
- Help the team translate the business requirements into R&D tasks and manage the roadmap of the R&D tasks.
- Designing, building, and implementation of the product; participating in requirements elicitation, validation of architecture, creation and review of high and low level design, assigning and reviewing tasks for product implementation.
- Work closely with product managers, UX designers, and end users, and integrate software components into a fully functional system
- Ownership of product/feature end-to-end for all phases from the development to the production.
- Ensuring the developed features are scalable and highly available with no quality concerns.
- Work closely with senior engineers to refine the design and implementation.
- Management and execution against project plans and delivery commitments.
- Assist directly and indirectly in the continual hiring and development of technical talent.
- Create and execute appropriate quality plans, project plans, test strategies and processes for development activities in concert with business and project management efforts.
The ideal candidate is an engineer who is passionate about delivering experiences that delight customers and creating solutions that are robust, and who can commit to and own deliveries end-to-end.
What You'll Need:
- A Bachelor's degree in Computer Science or related technical discipline.
- 2-3+ years of Software Development experience with proficiency in Java or equivalent object-oriented languages, coupled with design and SOA
- Fluency with Java; Spring is good to have.
- Experience with JEE applications and frameworks such as Struts, Spring, MyBatis, Maven, and Gradle
- Strong knowledge of Data Structures, Algorithms and CS fundamentals.
- Experience in at least one shell scripting language, SQL (SQL Server, PostgreSQL), and data modeling skills
- Excellent analytical and reasoning skills
- Ability to learn new domains and deliver output
- Hands on Experience with the core AWS services
- Experience working with CI/CD tools (Jenkins, Spinnaker, Nexus, GitLab, TeamCity, GoCD, etc.)
- Expertise in at least one of the following:
- Kafka, ZeroMQ, AWS SNS/SQS, or equivalent streaming technology (a minimal Kafka sketch follows this list)
- Distributed cache/in memory data grids like Redis, Hazelcast, Ignite, or Memcached
- Distributed column store databases like Snowflake, Cassandra, or HBase
- Spark, Flink, Beam, or equivalent streaming data processing frameworks
- Proficiency in writing and reviewing Python and other object-oriented languages is a plus
- Experience building automations and CI/CD pipelines (integration, testing, deployment)
- Experience with Kubernetes would be a plus.
- Good understanding of working with distributed teams using Agile: Scrum, Kanban
- Strong interpersonal skills as well as excellent written and verbal communication skills
- Attention to detail and quality, and the ability to work well in and across teams
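As a rough sketch of the streaming expertise called out above (assuming Kafka via the third-party kafka-python package; any of the listed technologies would do), this shows a JSON producer and consumer against a local broker. The topic name, payload, and broker address are invented for the example.

```python
import json

from kafka import KafkaConsumer, KafkaProducer

# Producer: publish order events as JSON to a local broker.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 42, "status": "created"})
producer.flush()

# Consumer: read from the beginning of the topic and process each event.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)   # e.g. {'order_id': 42, 'status': 'created'}
    break                  # stop after one message in this demo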
JOB DESCRIPTION: THE IDEAL CANDIDATE WILL:
• Ensure new features and subject areas are modelled to integrate with existing structures and provide a consistent view.
• Develop and maintain documentation of the data architecture, data flow, and data models of the data warehouse, appropriate for various audiences.
• Provide direction on the adoption of cloud technologies (Snowflake) and industry best practices in the field of data warehouse architecture and modelling.
• Provide technical leadership to large enterprise-scale projects. You will also be responsible for preparing estimates and defining technical solutions for proposals (RFPs). This role requires a broad range of skills and the ability to step into different roles depending on the size and scope of the project.
ELIGIBILITY CRITERIA: Desired Experience/Skills:
• Must have a total of 5+ years in IT, with 2+ years' experience working as a Snowflake Data Architect and 4+ years in data warehouse, ETL, and BI projects.
• Must have at least two end-to-end implementations of the Snowflake cloud data warehouse and three end-to-end on-premise data warehouse implementations, preferably on Oracle.
• Expertise in Snowflake: data modelling, ELT using Snowflake SQL, implementing complex stored procedures, and standard DWH and ETL concepts
• Expertise in Snowflake advanced concepts such as setting up resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, zero-copy clone, and Time Travel, and understanding of how to use these features (a brief sketch follows the eligibility criteria)
• Expertise in deploying Snowflake features such as data sharing, events and lake-house patterns
• Hands-on experience with Snowflake utilities (SnowSQL, Snowpipe) and Big Data modelling techniques using Python
• Experience in Data Migration from RDBMS to Snowflake cloud data warehouse
• Deep understanding of relational as well as NoSQL data stores, methods and approaches (star and snowflake, dimensional modelling)
• Experience with data security and data access controls and design
• Experience with AWS or Azure data storage and management technologies such as S3 and ADLS
• Build processes supporting data transformation, data structures, metadata, dependency and workload management
• Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
• Provide resolution to an extensive range of complicated data pipeline related problems, proactively and as issues surface
• Must have expertise in AWS or Azure Platform as a Service (PaaS)
• Certified Snowflake cloud data warehouse Architect (Desirable)
• Should be able to troubleshoot problems across infrastructure, platform and application domains.
• Must have experience of Agile development methodologies
• Strong written communication skills; effective and persuasive in both written and oral communication
Nice-to-have Skills/Qualifications:
• Bachelor's and/or master’s degree in computer science or equivalent experience.
• Strong communication, analytical and problem-solving skills with a high attention to detail.
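To make the zero-copy clone and Time Travel expectations concrete, here is a minimal, hedged sketch using the snowflake-connector-python package. The account, credentials, warehouse, and table names are placeholders; the SQL itself uses standard Snowflake syntax for cloning and AT(OFFSET ...) Time Travel.

```python
import snowflake.connector

# Placeholder connection details; a real setup would use managed credentials.
conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="analyst",
    password="...",
    warehouse="COMPUTE_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()

# Zero-copy clone: an instant, storage-free copy of a table for safe experiments.
cur.execute("CREATE TABLE orders_dev CLONE orders")

# Time Travel: query the table as it looked one hour ago (offset in seconds).
cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
print(cur.fetchone())
```

Because a clone shares the source table's micro-partitions until either side changes, it costs nothing up front, which is why it pairs naturally with the resource-monitor and cost-tuning duties listed above.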
About you:
• You are self-motivated, collaborative, eager to learn, and hands on
• You love trying out new apps, and find yourself coming up with ideas to improve them
• You stay current with the latest trends and technologies
• You are particular about following industry best practices and have high standards regarding quality
XpressBees – a logistics company started in 2015 – is amongst the fastest growing companies of its sector. While we started off rather humbly in the space of ecommerce B2C logistics, the last 5 years have seen us steadily progress towards expanding our presence. Our vision to evolve into a strong full-service logistics organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross-border operations. Our strong domain expertise and constant focus on meaningful innovation have helped us rapidly evolve as the most trusted logistics partner of India. We have progressively carved our way towards best-in-class technology platforms, an extensive network reach, and a seamless last-mile management system. While on this aggressive growth path, we seek to become the one-stop shop for end-to-end logistics solutions. Our big focus areas for the very near future include strengthening our presence as service providers of choice and leveraging the power of technology to improve efficiencies for our clients.
Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform and infrastructure to support high-quality and agile decision-making in our supply chain and logistics workflows. You will define the way we collect and operationalize data (structured/unstructured), and build production pipelines for our machine learning models and (RT, NRT, batch) reporting & dashboarding requirements. As a Senior Data Engineer in the XB Data Platform Team, you will use your experience with modern cloud and data frameworks to build products (with storage and serving systems) that drive optimisation and resilience in the supply chain via data visibility, intelligent decision making, insights, anomaly detection and prediction.
What You Will Do
• Design and develop the data platform and data pipelines for reporting, dashboarding, and machine learning models. These pipelines would productionize machine learning models and integrate with agent review tools.
• Meet the data completeness, correctness, and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support business needs. Come up with logical and physical database designs across platforms (MPP, MR, Hive/Pig) that are optimal for different use cases (structured/semi-structured). Envision and implement the optimal data modelling, physical design, and performance optimization approach required for the problem (a dimensional-modelling sketch follows this list).
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines, and envision and build their successors.
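As a hedged illustration of the dimensional modelling referenced above, the sketch below builds a tiny star schema (one fact table, two dimensions) in an in-memory SQLite database. The table and column names are invented for the example; a warehouse implementation would target an MPP engine instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension tables hold descriptive attributes.
    CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, full_date TEXT, month TEXT);
    CREATE TABLE dim_hub  (hub_key  INTEGER PRIMARY KEY, hub_name TEXT, city TEXT);

    -- The fact table holds measures plus foreign keys into each dimension.
    CREATE TABLE fact_shipments (
        date_key INTEGER REFERENCES dim_date(date_key),
        hub_key  INTEGER REFERENCES dim_hub(hub_key),
        shipments INTEGER,
        delivered INTEGER
    );

    INSERT INTO dim_date VALUES (20240101, '2024-01-01', '2024-01');
    INSERT INTO dim_hub  VALUES (1, 'Pune Central', 'Pune');
    INSERT INTO fact_shipments VALUES (20240101, 1, 500, 480);
""")

# A typical star-schema query: join facts to dimensions, aggregate a measure.
query = """
    SELECT d.month, h.city,
           SUM(f.delivered) * 1.0 / SUM(f.shipments) AS delivery_rate
    FROM fact_shipments f
    JOIN dim_date d ON d.date_key = f.date_key
    JOIN dim_hub  h ON h.hub_key  = f.hub_key
    GROUP BY d.month, h.city;
"""
print(conn.execute(query).fetchall())
```

Keeping measures in a narrow fact table and descriptions in dimensions is what lets the same physical design serve reporting, dashboarding, and ML feature extraction.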
Qualifications & Experience relevant for the role
• A bachelor's degree in Computer Science or a related field, with 6 to 9 years of technology experience.
• Knowledge of relational and NoSQL data stores, stream processing, and micro-batching, to make technology and design choices.
• Strong experience in system integration, application development, ETL, and data-platform projects. Talented across technologies used in the enterprise space.
• Software development experience using:
• Expertise in relational and dimensional modelling
• Exposure across the entire SDLC process
• Experience in cloud architecture (AWS)
• Proven track record of keeping existing technical skills current and developing new ones, so that you can make strong contributions to deep architecture discussions around systems and applications in the cloud (AWS).
• The characteristics of a forward thinker and self-starter who flourishes with new challenges and adapts quickly to new knowledge.
• Ability to work with cross-functional teams of consulting professionals across multiple projects.
• A knack for helping an organization understand application architectures and integration approaches, architect advanced cloud-based solutions, and help launch the build-out of those systems.
• Passion for educating, training, designing, and building end-to-end systems.
Company Profile:
Easebuzz is a payment solutions company (a fintech organisation) that enables online merchants to accept, process, and disburse payments through developer-friendly APIs. We are focused on building plug-and-play products, including the payment infrastructure, to solve complete business problems. It is definitely a wonderful place where all the action related to payments, lending, subscriptions, and eKYC happens at the same time.
We have been consistently profitable and are constantly developing new innovative products; as a result, we have grown 4x over the past year alone. We are well capitalised and recently closed a fundraise of $4M in March 2021 from prominent VC firms and angel investors. The company is based out of Pune and has a total strength of 180 employees. Easebuzz’s corporate culture is tied into the vision of building a workplace which breeds open communication and minimal bureaucracy. An equal opportunity employer, we welcome and encourage diversity in the workplace. One thing you can be sure of is that you will be surrounded by colleagues who are committed to helping each other grow.
Easebuzz Pvt. Ltd. has its presence in Pune, Bangalore, Gurugram.
Salary: As per company standards.
Designation: Data Engineer
Location: Pune
Experience with ETL, Data Modeling, and Data Architecture
Design, build and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with 3rd parties: Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue.
Experience with AWS cloud data lake for development of real-time or near real-time use cases
Experience with messaging systems such as Kafka/Kinesis for real-time data ingestion and processing (a minimal Kinesis sketch follows this list)
Build data pipeline frameworks to automate high-volume and real-time data delivery
Create prototypes and proof-of-concepts for iterative development.
Experience with NoSQL databases such as DynamoDB, MongoDB, etc.
Create and maintain optimal data pipeline architecture.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Evangelize a very high standard of quality, reliability and performance for data models and algorithms that can be streamlined into the engineering and sciences workflow
Build and enhance data pipeline architecture by designing and implementing data ingestion solutions.
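As a rough sketch of the real-time ingestion experience mentioned above (assuming Kinesis via boto3; the stream name and payment-event payload are invented for the example), this publishes a JSON event to a Kinesis stream and reads it back from a shard.

```python
import json

import boto3

kinesis = boto3.client("kinesis")
STREAM = "payment-events"  # hypothetical stream name

# Producer: each payment event becomes one record; the partition key
# controls shard assignment, so keying by merchant preserves per-merchant order.
kinesis.put_record(
    StreamName=STREAM,
    Data=json.dumps({"txn_id": "T1001", "amount": 499.0, "status": "success"}).encode("utf-8"),
    PartitionKey="merchant-42",
)

# Consumer: read the first shard from the start. A production consumer
# would use KCL or enhanced fan-out and checkpoint its position.
shard_id = kinesis.describe_stream(StreamName=STREAM)["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=STREAM, ShardId=shard_id, ShardIteratorType="TRIM_HORIZON"
)["ShardIterator"]
for record in kinesis.get_records(ShardIterator=iterator)["Records"]:
    print(json.loads(record["Data"]))
```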
Employment Type
Full-time
Sr. Java Software Engineer:
Preferred Education & Experience:
- Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, or a related technical field; at least 3 years of relevant experience in lieu of the above if from a different stream of education.
- Well-versed in and 5+ years of hands-on designing experience in Object Oriented Design, Data Modeling, Class & Object Modeling, Microservices Architecture & Design.
- Well-versed in and 5+ years of hands-on programming experience in Core Java Programming, Advanced Java Programming, Spring Framework, Spring Boot or Micronaut Framework, Log Framework, Build & Deployment Framework, etc.
- 3+ years of hands-on experience developing Domain-Driven Microservices using libraries & frameworks such as Micronaut, Spring Boot, etc.
- 3+ years of hands-on experience developing connector frameworks such as Apache Camel, the Akka framework, etc.
- 3+ years of hands-on experience with RDBMS & NoSQL database concepts and development practices (PostgreSQL, MongoDB, Elasticsearch, Amazon S3).
- 3+ years of hands-on experience developing web services using REST and API Gateway, with token-based authentication and access management.
- 1+ years of hands-on experience developing and hosting microservices using Serverless and Container based development (AWS Lambda, Docker, Kubernetes, etc.).
- Having Knowledge & hands-on experience developing applications using Behavior Driven Development, Test Driven Development Methodologies is a Plus.
- Having Knowledge & hands-on experience in AWS Cloud Services such as IAM, Lambda, EC2, ECS, ECR, API Gateway, S3, SQS, Kinesis, CloudWatch, DynamoDB, etc. is also a Plus.
- Having Knowledge & hands-on experience in DevOps CI/CD tools such as JIRA, Git (Bitbucket/GitHub), Artifactory, etc. & Build tools such as Maven & Gradle.
- 2+ years of hands-on development experience in Java centric Developer Tools, Management & Governance, Networking and Content Delivery, Security, Identity, and Compliance, etc.
- Knowledge of and hands-on experience with Apache NiFi, Apache Spark, or Apache Flink is also a plus.
- Knowledge of and hands-on experience with Python, NodeJS, or Scala programming is also a plus.
Required Experience: 5+ Years
Job Location: Remote / Pune
Open Positions: 1
• Responsible for designing, deploying, and maintaining analytics environment that processes data at scale
• Contribute design, configuration, deployment, and documentation for components that manage data ingestion, real time streaming, batch processing, data extraction, transformation, enrichment, and loading of data into a variety of cloud data platforms, including AWS and Microsoft Azure
• Identify gaps and improve the existing platform to improve quality, robustness, maintainability, and speed
• Evaluate new and upcoming big data solutions and make recommendations for adoption to extend our platform to meet advanced analytics use cases, such as predictive modeling and recommendation engines
• Data modelling and data warehousing at cloud scale using cloud-native solutions.
• Perform development, QA, and DevOps roles as needed to ensure total end-to-end responsibility for solutions
COMPETENCIES
• Experience building, maintaining, and improving Data Models / Processing Pipeline / routing in large scale environments
• Fluency in common query languages, API development, data transformation, and integration of data streams
• Strong experience with large-dataset platforms (e.g., Amazon EMR, Amazon Redshift, AWS Lambda & Fargate, Amazon Athena, Azure SQL Database, Azure Database for PostgreSQL, Azure Cosmos DB, Databricks)
• Fluency in multiple programming languages, such as Python, Shell Scripting, SQL, Java, or similar languages and tools appropriate for large scale data processing.
• Experience with acquiring data from varied sources, such as APIs, data queues, flat files, and remote databases
• Understanding of traditional Data Warehouse components (e.g. ETL, Business Intelligence Tools)
• Creativity to go beyond current tools to deliver the best solution to the problem
A global business process management company
Designation – Deputy Manager - TS
Job Description
- Total of 8/9 years of development experience in Data Engineering. B1/BII role.
- Minimum of 4/5 years in AWS data integrations, with very good data modelling skills.
- Should be very proficient in end-to-end AWS data solution design, which includes not only strong data ingestion and integration skills (both data at rest and data in motion) but also complete DevOps knowledge.
- Should have experience in delivering at least 4 Data Warehouse or Data Lake Solutions on AWS.
- Should have very strong experience with Glue, Lambda, Data Pipeline, Step Functions, RDS, CloudFormation, etc.
- Strong Python skills.
- Should be an expert in cloud design principles, performance tuning, and cost modelling; AWS certifications will be an added advantage.
- Should be a team player with excellent communication, able to manage their work independently with minimal or no supervision.
- Life Science & Healthcare domain background will be a plus
Qualifications
BE/BTech/ME/MTech
We at Datametica Solutions Private Limited are looking for an SQL Lead/Architect who has a passion for the cloud, with knowledge of different on-premises and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks, and the like.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.
Job Description:
Experience: 6+ Years
Work Location: Pune / Hyderabad
Technical Skills :
- Good programming experience as an Oracle PL/SQL, MySQL, and SQL Server Developer
- Knowledge of database performance tuning techniques
- Rich experience in database development
- Experience in designing and implementing business applications using the Oracle Relational Database Management System
- Experience in developing complex database objects like stored procedures, functions, packages, and triggers using SQL and PL/SQL (see the sketch after this list)
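As a minimal, hedged sketch of the stored-procedure work described above, this uses the mysql-connector-python package against a hypothetical MySQL database (MySQL being one of the listed platforms; PL/SQL on Oracle would be analogous). The connection details, the employees table, and the procedure are all invented for the example.

```python
import mysql.connector

# Placeholder connection; a real deployment would use managed credentials,
# and assumes an existing 'hr' database with an 'employees' table.
conn = mysql.connector.connect(
    host="localhost", user="dev", password="...", database="hr"
)
cur = conn.cursor()

# A simple stored procedure: count employees in a given department.
cur.execute("""
    CREATE PROCEDURE dept_headcount(IN dept VARCHAR(64), OUT headcount INT)
    BEGIN
        SELECT COUNT(*) INTO headcount FROM employees WHERE department = dept;
    END
""")

# callproc passes the IN parameter and returns the full parameter tuple,
# with OUT parameters filled in by the server.
result = cur.callproc("dept_headcount", ("Engineering", 0))
print(result[1])  # the OUT headcount value
```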
Required Candidate Profile :
- Excellent communication, interpersonal, analytical skills and strong ability to drive teams
- Analyzes data requirements and data dictionaries for moderate to complex projects
- Leads data-model-related analysis discussions while collaborating with Application Development teams, Business Analysts, and Data Analysts during joint requirements analysis sessions
- Translate business requirements into technical specifications with an emphasis on highly available and scalable global solutions
- Stakeholder management and client engagement skills
- Strong communication skills (written and verbal)
About Us!
A global leader in the Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud by leveraging Automation.
We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum platforms, along with ETLs like Informatica, DataStage, Ab Initio, and others, to cloud-based data warehousing, with other capabilities in data engineering, advanced analytics solutions, data management, data lakes, and cloud optimization.
Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.
We have our own products!
Eagle - Data Warehouse Assessment & Migration Planning Product
Raven - Automated Workload Conversion Product
Pelican - Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.
Why join us!
Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over the years are the key factors in achieving our success.
Benefits we Provide!
Working with highly technical, passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy
Check out more about us on our website below!
www.datametica.com
Advanced degree in computer science, math, statistics, or a related discipline (a master's degree is a must)
Extensive data modeling and data architecture skills
Programming experience in Python, R
Background in machine learning frameworks such as TensorFlow or Keras
Knowledge of Hadoop or other distributed computing systems
Experience working in an Agile environment
Advanced math skills (important):
- Linear algebra
- Discrete math
- Differential equations (ODEs and numerical)
- Theory of statistics 1
- Numerical analysis 1 (numerical linear algebra) and 2 (quadrature)
- Abstract algebra
- Number theory
- Real analysis
- Complex analysis
- Intermediate analysis (point-set topology)
Strong written and verbal communication skills
Hands-on experience with NLP and NLG
Experience with advanced statistical techniques and concepts (GLM/regression, random forests, boosting, trees, text mining) and experience applying them; a brief illustrative sketch follows.
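As a hedged illustration of the tree-based techniques listed above, the sketch below fits a random forest classifier with scikit-learn on a toy dataset bundled with the library and reports held-out accuracy. The dataset and hyperparameters are chosen only to keep the example self-contained.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy binary-classification dataset bundled with scikit-learn.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A random forest averages many decorrelated decision trees,
# trading a little bias for a large reduction in variance.
model = RandomForestClassifier(n_estimators=200, max_depth=5, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```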