Data Engineering Job Openings in Hyderabad
Role Overview
We are looking for a Tech Lead with a strong background in fintech, especially with experience or a strong interest in fraud prevention and Anti-Money Laundering (AML) technologies.
This role is critical in leading our fintech product development, ensuring the integration of robust security measures, and guiding our team in Hyderabad towards delivering high-quality, secure, and compliant software solutions.
Responsibilities
- Lead the development of fintech solutions, focusing on fraud prevention and AML, using TypeScript, React, Python, and SQL databases (a minimal illustrative sketch follows this list).
- Architect and deploy secure, scalable applications on AWS or Azure, adhering to the best practices in financial security and data protection.
- Design and manage databases with an emphasis on security, integrity, and performance, ensuring compliance with fintech regulatory standards.
- Guide and mentor the development team, promoting a culture of excellence, innovation, and continuous learning in the fintech space.
- Collaborate with stakeholders across the company, including product management, design, and QA, to ensure project alignment with business goals and regulatory requirements.
- Keep abreast of the latest trends and technologies in fintech, fraud prevention, and AML, applying this knowledge to drive the company's objectives.
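For illustration only, here is a minimal Python sketch of the kind of rule-based transaction screening that fraud-prevention and AML work involves. The table layout, threshold, and sanctions list are hypothetical assumptions, not part of this role's actual stack.

```python
# Toy rule-based AML screen; schema and threshold are hypothetical.
import sqlite3

FLAG_THRESHOLD = 10_000  # assumed reporting threshold, for illustration

def flag_suspicious_transactions(conn: sqlite3.Connection) -> list:
    """Flag transactions over the threshold or sent to sanctioned accounts."""
    cur = conn.execute(
        """
        SELECT id, account_id, amount
        FROM transactions
        WHERE amount >= ?
           OR account_id IN (SELECT account_id FROM sanctions_list)
        """,
        (FLAG_THRESHOLD,),
    )
    return cur.fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE transactions (id INTEGER, account_id TEXT, amount REAL)")
    conn.execute("CREATE TABLE sanctions_list (account_id TEXT)")
    conn.executemany(
        "INSERT INTO transactions VALUES (?, ?, ?)",
        [(1, "A1", 2_500.0), (2, "B7", 15_000.0)],
    )
    print(flag_suspicious_transactions(conn))  # -> [(2, 'B7', 15000.0)]
```

Real AML engines layer many such rules with scoring, velocity checks, and case management; this sketch only shows the shape of a single rule.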
Requirements
- 5-7 years of experience in software development, with a focus on fintech solutions and a strong understanding of fraud prevention and AML strategies.
- Expertise in TypeScript and React, plus familiarity with Python.
- Proven experience with SQL databases and cloud services (AWS or Azure), with certifications in these areas being a plus.
- Demonstrated ability to design and implement secure, high-performance software architectures in the fintech domain.
- Exceptional leadership and communication skills, with the ability to inspire and lead a team towards achieving excellence.
- A bachelor's degree in Computer Science, Engineering, or a related field, with additional certifications in fintech, security, or compliance being highly regarded.
Why Join Us?
- Opportunity to be at the cutting edge of fintech innovation, particularly in fraud prevention and AML.
- Contribute to a company with ambitious goals to revolutionize software development and make a historical impact.
- Be part of a visionary team dedicated to creating a lasting legacy in the tech industry.
- Work in an environment that values innovation, leadership, and the long-term success of its employees.
AWS Data Engineer:
Job Description
- 3+ years of experience in AWS data engineering.
- Design and build ETL pipelines and data lakes to automate ingestion of structured and unstructured data (a minimal sketch follows this list).
- Experience working with AWS big data technologies (Redshift, S3, AWS Glue, Kinesis, Athena, DMS, EMR, and Lambda for serverless ETL).
- Should have working knowledge of SQL and NoSQL databases.
- Have worked on batch and real-time pipelines.
- Excellent programming and debugging skills in Scala or Python with Spark.
- Good experience in data lake formation, Apache Spark, and Python, with hands-on experience deploying models.
- Must have experience with production migration processes.
- Nice to have: experience with Power BI visualization tools and connectivity.
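As a rough, hedged illustration of the ETL work listed above, here is a minimal PySpark batch job that ingests raw JSON from S3 and writes partitioned Parquet into a data lake. The bucket names, paths, and column names are assumptions; on vanilla Spark outside EMR/Glue the paths would typically use the s3a:// scheme.

```python
# Minimal batch ETL sketch: S3 JSON -> cleaned, partitioned Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest raw JSON from a hypothetical S3 landing zone.
raw = spark.read.json("s3://example-landing/orders/2024/")

# Basic cleanup: drop rows missing keys, normalize the timestamp,
# and derive a partition column.
clean = (
    raw.dropna(subset=["order_id", "amount"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Land partitioned Parquet in the lake zone for Athena or Redshift
# Spectrum to consume.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-datalake/orders/"
)
```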
Roles & Responsibilities:
- Design, build, and operationalize large-scale enterprise data solutions and applications.
- Analyze, re-architect, and re-platform on-premises data warehouses to data platforms on the AWS cloud.
- Design and build production data pipelines from ingestion to consumption within an AWS big data architecture, using Python or Scala (a hedged consumption example follows this list).
- Perform detailed assessments of current-state data platforms and create an appropriate transition path to the AWS cloud.
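As a hedged sketch of the "consumption" end of such a pipeline, a curated lake table can be queried through Athena with boto3. The region, database, table, and results bucket below are assumptions.

```python
# Query the curated data lake with Athena; names are hypothetical.
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

response = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(amount) FROM orders GROUP BY order_date",
    QueryExecutionContext={"Database": "example_datalake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
# Athena runs asynchronously: poll get_query_execution() with this ID
# until the state is SUCCEEDED, then read results from S3.
print(response["QueryExecutionId"])
```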
Job description
Role: Lead Architecture (Spark, Scala, Big Data/Hadoop, Java)
Primary Location: India - Pune, Hyderabad
Experience: 7-12 Years
Management Level: 7
Joining Time: Immediate Joiners are preferred
- Attend requirements gathering workshops, estimation discussions, design meetings and status review meetings
- Experience in solution design and solution architecture for data engineering, building and implementing big data projects on-premises and in the cloud.
- Align architecture with business requirements and stabilize the developed solution
- Ability to build prototypes to demonstrate the technical feasibility of your vision
- Professional experience facilitating and leading solution design, architecture and delivery planning activities for data intensive and high throughput platforms and applications
- Ability to benchmark systems, analyze system bottlenecks, and propose solutions to eliminate them
- Able to help programmers and project managers in the design, planning, and governance of implementing projects of any kind
- Develop, construct, test and maintain architectures and run Sprints for development and rollout of functionalities
- Data analysis and code development experience, ideally in big data: Spark, Hive, Hadoop, Java, Python, PySpark (a hedged sketch follows this list)
- Execute projects of various types, i.e. design, development, implementation, and migration of functional analytics models/business logic across architecture approaches
- Work closely with Business Analysts to understand the core business problems and deliver efficient IT solutions for the product
- Deploy sophisticated analytics code on any major cloud platform
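As a small, hedged illustration of the Spark/Hive analysis work in this role, the sketch below reads an aggregate from a Hive table and inspects the physical plan before tuning. The table and column names are hypothetical, and it assumes Spark 3.x with a Hive metastore available.

```python
# Spark-on-Hive sketch: aggregate a table and inspect the plan.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-benchmark")
    .enableHiveSupport()  # requires a configured Hive metastore
    .getOrCreate()
)

df = spark.sql(
    "SELECT customer_id, SUM(amount) AS total FROM sales GROUP BY customer_id"
)
df.explain(mode="formatted")  # review shuffles and joins before tuning
df.write.mode("overwrite").saveAsTable("sales_totals")
```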
Perks and Benefits We Provide!
- Working with Highly Technical and Passionate, mission-driven people
- Subsidized Meals & Snacks
- Flexible Schedule
- Approachable leadership
- Access to various learning tools and programs
- Pet Friendly
- Certification Reimbursement Policy
- Check out more about us on our website below!
www.datametica.com
- Expertise in designing and implementing enterprise scale database (OLTP) and Data warehouse solutions.
- Hands-on experience in implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics), and big data processing using Azure Databricks and Azure HDInsight.
- Expert in T-SQL programming for complex stored procedures, functions, views, and query optimization.
- Should be familiar with database development for both on-premises and SaaS applications using SQL Server and PostgreSQL.
- Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
- Experience and expertise in building machine learning models using logistic and linear regression, decision tree, and random forest algorithms (a hedged sketch follows this list).
- Writing PolyBase queries to export data to and import data from Azure Data Lake.
- Building data models both tabular and multidimensional using SQL Server data tools.
- Writing data preparation, cleaning, and processing steps using Python, Scala, and R.
- Programming experience using the Python libraries NumPy, Pandas, and Matplotlib.
- Implementing NoSQL databases and writing queries using Cypher.
- Designing end user visualizations using Power BI, QlikView and Tableau.
- Experience working with all versions of SQL Server 2005/2008/2008R2/2012/2014/2016/2017/2019
- Experience using the expression languages MDX and DAX.
- Experience in migrating on-premises SQL Server databases to Microsoft Azure.
- Hands on experience in using Azure blob storage, Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2.
- Performance-tuning complex SQL queries, with hands-on experience using SQL Server Extended Events.
- Data modeling using Power BI for ad hoc reporting.
- Raw data load automation using T-SQL and SSIS.
- Expert in migrating existing on-premises databases to Azure SQL.
- Experience in using U-SQL for Azure Data Lake Analytics.
- Hands on experience in generating SSRS reports using MDX.
- Experience in designing predictive models using Python and SQL Server.
- Developing machine learning models using Azure Databricks and SQL Server.
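To make two of the bullets above concrete, here is a hedged Python sketch that pulls features from an Azure SQL database with pyodbc and fits a logistic-regression model with scikit-learn. The server, credentials, and table and column names are all assumptions.

```python
# Pull features from Azure SQL and fit a simple model; names are hypothetical.
import pandas as pd
import pyodbc
from sklearn.linear_model import LogisticRegression

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example.database.windows.net;DATABASE=analytics;"
    "UID=svc_user;PWD=<secret>"
)
df = pd.read_sql("SELECT age, balance, churned FROM dbo.customers", conn)

model = LogisticRegression().fit(df[["age", "balance"]], df["churned"])
print(model.score(df[["age", "balance"]], df["churned"]))
```

In practice the model would be evaluated on a held-out split rather than the training data; this sketch only shows how the SQL and ML pieces connect.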
SpringML is looking to hire a top-notch Senior Data Engineer who is passionate about working with data and using the latest distributed frameworks to process large datasets. Your primary role will be to design and build data pipelines. You will be focused on helping client projects with data integration, data prep, and implementing machine learning on datasets. In this role, you will work on some of the latest technologies, collaborate with partners on early wins, take a consultative approach with clients, interact daily with executive leadership, and help build a great company. Chosen team members will be part of the core team and play a critical role in scaling up our emerging practice.
RESPONSIBILITIES:
- Ability to work as a member of a team assigned to design and implement data integration solutions.
- Build data pipelines using standard frameworks in Hadoop, Apache Beam, and other open-source solutions (a minimal Beam sketch follows the Skills list).
- Learn quickly – ability to understand and rapidly comprehend new areas – functional and technical – and apply detailed and critical thinking to customer solutions.
- Propose design solutions and recommend best practices for large-scale data analysis
SKILLS:
- B.Tech degree in computer science, mathematics, or another relevant field.
- 4+ years of experience in ETL, data warehousing, visualization, and building data pipelines.
- Strong programming skills, with experience and expertise in one of the following: Java, Python, Scala, C.
- Proficient in big data/distributed computing frameworks such as Apache Spark and Kafka.
- Experience with Agile implementation methodologies
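As a minimal illustration of the pipeline work described above, here is a tiny Apache Beam pipeline that runs locally on the DirectRunner; the input/output paths and CSV layout are hypothetical.

```python
# Minimal Beam pipeline: read CSV lines, sum an amount column.
import apache_beam as beam

with beam.Pipeline() as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("input.csv")
        | "ParseAmount" >> beam.Map(lambda line: float(line.split(",")[1]))
        | "Sum" >> beam.CombineGlobally(sum)
        | "Write" >> beam.io.WriteToText("output")
    )
```

The same pipeline can be pointed at a distributed runner (e.g. Dataflow or Spark) by changing pipeline options rather than the transform code.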
- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
- Experience in migrating on-premises data warehouses to data platforms on the Azure cloud.
- Designing and implementing data engineering, ingestion, and transformation functions
- Azure Synapse or Azure SQL Data Warehouse
- Spark on Azure, as available in HDInsight and Databricks (a hedged sketch follows this list)
- Experience with Azure Analysis Services
- Experience in Power BI
- Experience with third-party solutions like Attunity/StreamSets and Informatica
- Experience with pre-sales activities (responding to RFPs, executing quick POCs)
- Capacity Planning and Performance Tuning on Azure Stack and Spark.
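A hedged sketch of the Spark-on-Azure work above: reading Parquet from ADLS Gen2 inside a Databricks notebook, where the `spark` session and `dbutils` are provided by the runtime. The storage account, container, and secret scope are assumptions.

```python
# Databricks notebook sketch; storage account and secret scope are hypothetical.
spark.conf.set(
    "fs.azure.account.key.examplestore.dfs.core.windows.net",
    dbutils.secrets.get(scope="lake", key="storage-key"),
)

events = spark.read.parquet(
    "abfss://raw@examplestore.dfs.core.windows.net/events/"
)
events.groupBy("event_type").count().write.mode("overwrite").saveAsTable(
    "event_counts"
)
```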
- Total experience of 7-10 years; should be interested in teaching and research
- 3+ years’ experience in data engineering which includes data ingestion, preparation, provisioning, automated testing, and quality checks.
- 3+ years of hands-on experience with big data cloud platforms like AWS and GCP, data lakes, and data warehouses
- 3+ years with big data and analytics technologies, including SQL and experience writing code for the Spark engine in Python, Scala, or Java
- Experience in designing, building, and maintaining ETL systems
- Experience in data pipeline and workflow management tools like Airflow (a minimal DAG sketch follows this list)
- Application development background, along with knowledge of analytics libraries: open-source natural language processing, statistical, and big data computing libraries
- Familiarity with Visualization and Reporting Tools like Tableau, Kibana.
- Should be good at storytelling with technology
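For the Airflow bullet above, here is a minimal Airflow 2.x DAG sketch; the task bodies and schedule are placeholder assumptions.

```python
# Minimal two-task DAG: extract then transform, daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data")  # placeholder for an ingestion step

def transform():
    print("clean and enrich data")  # placeholder for a transformation step

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```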
Qualification: B.Tech/BE/M.Sc/MBA/B.Sc; certifications in big data technologies and cloud platforms like AWS, Azure, and GCP are preferred
Primary Skills: Big Data + Python + Spark + Hive + Cloud Computing
Secondary Skills: NoSQL+ SQL + ETL + Scala + Tableau
Selection Process: one hackathon, one technical round, and one HR round
Benefit: free-of-cost training in data science from top-notch professors