


Job Description:
As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:
Designing and developing data pipelines: You will design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala. This includes data ingestion, data transformation, and data loading processes.
Data modeling and database design: You will design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.
Data integration and orchestration: You will leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various data sources and targets. This includes scheduling and monitoring data pipelines.
Data quality and governance: You will implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.
Performance optimization: You will optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.
Monitoring and troubleshooting: You will monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation. You will work closely with cross-functional teams to resolve data-related problems.
Documentation and collaboration: You will document data pipelines, data flows, and data transformation processes. You will collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide data engineering support.
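The ingest–transform–load flow described in the responsibilities above can be sketched end to end. This is a minimal stand-alone illustration using only the Python standard library; a production pipeline would use PySpark DataFrames on Azure Databricks, and the column names and filter threshold here are invented for the example.

```python
import csv
import io

# Hypothetical ingest -> transform -> load pipeline, sketched with the
# standard library; a real implementation would use PySpark DataFrames
# on Azure Databricks instead of in-memory lists.

RAW_CSV = """order_id,amount,region
1,120.50,EMEA
2,80.00,APAC
3,15.25,EMEA
"""

def ingest(text):
    """Parse raw CSV rows into dicts (the ingestion step)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Cast types and filter out small orders (the transformation step)."""
    out = []
    for r in rows:
        amount = float(r["amount"])
        if amount >= 50:  # illustrative business rule
            out.append({"order_id": int(r["order_id"]),
                        "amount": amount,
                        "region": r["region"]})
    return out

def load(rows):
    """Aggregate per region, standing in for a write to a data lake."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

totals = load(transform(ingest(RAW_CSV)))
print(totals)  # {'EMEA': 120.5, 'APAC': 80.0}
```

The same three-stage structure carries over directly when the lists become DataFrames and the functions become Spark transformations.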
Skills and Qualifications:
Strong experience with Azure Databricks, Python, SQL, ADF, PySpark, and Scala.
Proficiency in designing and developing data pipelines and ETL processes.
Solid understanding of data modeling concepts and database design principles.
Familiarity with data integration and orchestration using Azure Data Factory.
Knowledge of data quality management and data governance practices.
Experience with performance tuning and optimization of data pipelines.
Strong problem-solving and troubleshooting skills related to data engineering.
Excellent collaboration and communication skills to work effectively in cross-functional teams.
Understanding of cloud computing principles and experience with Azure services.

About Epik Solutions


Responsibilities:
- Coordinating with development teams to determine application requirements.
- Writing scalable code using Python programming language.
- Testing and debugging applications.
- Developing back-end components.
- Integrating user-facing elements using server-side logic.
- Assessing and prioritizing client feature requests.
- Integrating data storage solutions.
- Coordinating with front-end developers.
- Developing digital tools to monitor online traffic.
- Performing all phases of software engineering including requirements analysis, application design, and code development and testing.
- Designing and implementing product features in collaboration with business and IT stakeholders.
- Must be able to contribute to Tally automation.
Skills required:
- Web development using HTML, CSS, and JS; a good team player; agile delivery; application deployment to the cloud using Docker/Kubernetes containers.
- Should be able to analyze requirements and develop scripts/POCs
- Should have knowledge of deployments and documentation of deliverables
- Experience working in Linux environments
- Experience working with Python libraries
- Hands-on experience with a version control tool
- Experience with SQL databases
- Expert knowledge of Python and related frameworks, including Django and Flask.
- A deep understanding of multi-process architecture and the threading limitations of Python.
- Must have experience with an MVC framework
- Ability to collaborate on projects and work independently when required.
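The "threading limitations of Python" point above deserves a concrete illustration. The sketch below shows why shared state still needs a lock even under the GIL: the GIL serializes bytecode execution, but a read-modify-write such as `counter += 1` spans several bytecodes and is not atomic. (For CPU-bound parallelism, `multiprocessing` sidesteps the GIL entirely.)

```python
import threading

# The GIL serializes bytecode execution, but a compound operation such
# as "read, add, write back" is still not atomic across threads, so
# shared mutable state needs an explicit lock.

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:  # guard the read-modify-write sequence
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000
```

Without the lock, the final count can come up short because increments from different threads interleave and overwrite each other.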


We are seeking a highly skilled Team Lead – MERN Stack Developer to lead the development and delivery of scalable, maintainable, and high-performance applications using MongoDB, Express.js, React.js, and Node.js. The ideal candidate will have strong technical expertise, leadership skills, and a passion for building innovative solutions. If you have experience in microservices, micro-frontends, message queues, Redis, CI/CD, and best practices for large-scale applications, we’d love to have you on our team.
Key Responsibilities
Technical Leadership & Team Management
- Lead and mentor a team of developers, ensuring smooth collaboration, technical growth, and high-quality deliverables.
- Assign tasks, set priorities, track progress, and ensure project deadlines are met.
- Conduct code reviews to maintain coding standards, ensure maintainability, and improve code quality.
- Promote a culture of innovation, continuous learning, and problem-solving.
- Architect and implement microservices, micro-frontends, and modular micro-apps for scalable and maintainable solutions.
- Collaborate with stakeholders to translate business requirements into technical designs and implementation plans.
- Optimize performance and scalability for large-scale applications with dynamic data flows.
Backend Development (Node.js & Express.js)
- Design and implement secure, scalable backend structures and APIs.
- Develop reusable tools and libraries to support a microservices architecture.
- Handle error management and debugging for seamless application functionality.
- Work with message queues like RabbitMQ, Kafka, or Redis for event-driven communication.
- Use tools like New Relic or Prometheus to monitor and debug production systems.
- Solve challenging technical problems with smart, scalable solutions.
Database Management (MongoDB)
- Design efficient and scalable database schemas for large-scale projects.
- Optimize database performance using indexing, aggregation, and advanced querying techniques.
- Leverage MongoDB features like sharding and replication to handle large datasets efficiently.
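The aggregation techniques listed above can be made concrete by showing what a `$group`/`$sum` stage computes. The sketch below reproduces that computation in plain Python over an illustrative document list (the field names are made up); in MongoDB the same work would run server-side via `collection.aggregate(...)`.

```python
from collections import defaultdict

# Documents as they might sit in a MongoDB collection; field names are
# illustrative. The function below mirrors what the pipeline
#   [{"$group": {"_id": "$customer", "total": {"$sum": "$amount"}}}]
# would compute server-side.

orders = [
    {"customer": "acme", "amount": 250},
    {"customer": "acme", "amount": 100},
    {"customer": "globex", "amount": 75},
]

def group_sum(docs, key, value):
    """Group documents by `key` and sum the `value` field per group."""
    totals = defaultdict(int)
    for d in docs:
        totals[d[key]] += d[value]
    return dict(totals)

print(group_sum(orders, "customer", "amount"))  # {'acme': 350, 'globex': 75}
```

An index on the grouping key is what lets MongoDB feed such a stage efficiently at scale, which is why indexing and aggregation are listed together above.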
Frontend Development (React.js & JavaScript)
- Build and optimize reusable, high-performance React.js components and dynamic user interfaces.
- Develop robust solutions for complex UI interactions, including drag-and-drop functionality.
- Implement efficient state management using Redux, Context API, or React Query.
- Optimize web performance, including DOM manipulation, memory management, and responsiveness.
- Ensure applications adhere to WCAG accessibility standards and provide excellent user experiences.
Best Practices & Continuous Improvement
- Define and enforce coding standards, workflows, and best practices for high-quality development.
- Develop and maintain CI/CD pipelines for automated testing and deployment.
- Write secure and robust code to protect against vulnerabilities like XSS and CSRF.
- Regularly monitor and optimize application performance, security, and scalability.
- Stay updated on the latest trends in MERN stack technologies and software development practices.
- Collaborate with DevOps teams for containerized deployments using Docker and Kubernetes.


About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases.
Data Axle Pune is pleased to have achieved certification as a Great Place to Work!
Roles & Responsibilities:
We are looking for a Senior Data Scientist to join the Data Science Client Services team to continue our success of identifying high quality target audiences that generate profitable marketing return for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.
We are looking for a Senior Data Scientist who will be responsible for:
- Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
- Design or enhance ML workflows for data ingestion, model design, model inference and scoring
- Oversight of team project execution and delivery
- Establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies
- Visualize and publish model performance results and insights to internal and external audiences
Qualifications:
- Master's degree in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
- Minimum of 5 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
- Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
- Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
- Proficiency in Python and SQL required; PySpark/Spark experience a plus
- Ability to conduct productive peer reviews and maintain proper code structure in GitHub
- Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
- Working knowledge of modern CI/CD methods
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
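The A/B-testing knowledge called out in the qualifications above can be illustrated with a two-proportion z-test, a standard way to compare conversion rates between a test's control and treatment arms. The counts below are invented for the example.

```python
import math

# Minimal two-proportion z-test, the kind of check used to compare
# conversion rates between an A/B test's control and treatment arms.
# The counts below are made-up illustrative numbers.

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return the z-statistic for the difference in two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control: 200/10,000 conversions; treatment: 260/10,000 conversions.
z = two_proportion_z(200, 10_000, 260, 10_000)
print(round(z, 2))  # 2.83 -> exceeds 1.96, significant at the 5% level
```

In practice this sits at the end of a scoring workflow: the model selects the audience, the campaign runs, and a test like this validates the lift.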
● Able to contribute to the gathering of functional requirements, developing technical specifications, and project & test planning
● Demonstrating technical expertise and solving challenging programming and design problems
● Roughly 80% hands-on coding
● Generate technical documentation and PowerPoint presentations to communicate architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
● Work cross-functionally with various Bidgely teams, including product management, QA/QE, various product lines, and/or business units, to drive results forward
Requirements
● BS/MS in computer science or equivalent work experience
● 2–4 years' experience designing and developing applications in data engineering
● Hands-on experience with Big Data ecosystems: Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, Zookeeper
● Expertise with any of the following object-oriented languages (OOD): Java/J2EE, Scala, Python
● Strong leadership experience: leading meetings, presenting as required
● Excellent communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
● Expertise in the software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Experience with cloud platforms, preferably AWS
● Good understanding of, and ability to develop, software, prototypes, or proofs of concept (POCs) for various data engineering requirements


We are working on innovative solutions in the intersection of Internet of Things and Big Data Analytics. Our solution, AutoWiz is a Platform-as-a-Service that enables insightful connected vehicle experiences. AutoWiz Platform is a scalable and versatile vehicle data analytics platform for companies in Automotive, Mobility, Motor Insurance and Logistics domain to offer differentiated solutions based on vehicle generated data.
Based on the AutoWiz Platform, we offer telematics and mobility solutions and apps. AutoWiz connects vehicles to the AutoWiz cloud, where AutoWiz develops insights that lead to better ownership experiences and decisions across the lifecycle of vehicles.
See more information at www.autowiz.in
The position is open for a skilled iOS mobile app developer to develop and enhance the AutoWiz Mobile App and related automotive IoT apps.
Responsibilities
- Understand the UI/UX designs and translate them into a fully functional iOS app.
- Should have 2+ years of hands-on, in-depth experience developing apps for iPhones and iPads, interfacing with the backend server using APIs, and deploying apps on the App Store.
- Ability to work in agile mode with evolving requirements.
- Ensure the performance, quality and responsiveness of the application.
- Strong knowledge of different versions of iOS, dealing with different screen sizes.
- Experience with push notifications, local DBs, the Google Maps API, interfacing with additional sensors, and analytics is a plus.
Essential qualifications
- A Bachelor’s degree in Engineering or Master’s in Computer Science/Information sciences.
- 2+ Years of relevant experience in iOS Mobile App Development
Job Description:
Minimum 3+ years of experience
Java, OOP, data structures, design patterns, multithreading, Spring, Hibernate, JavaScript-based technologies, knowledge of J2EE, PL/SQL.
Core Java topics, including language features and architecture.
OOP concepts in depth, with real-time scenarios.
Understanding of special keywords and their applicability, as well as their advantages and disadvantages (e.g. static, final, this, super); static should be covered in depth.
Singleton class, serialization, cloning, and anonymous inner class concepts.
In-depth concepts of dynamic and static polymorphism & the multiple inheritance issue and its resolution.
Concepts of String, exception handling, arrays, etc.
Java SE 8 and 9 features: functional interfaces, lambda expressions, default and static methods, etc.
Collection classes in detail, along with their internal data structures and working principles.
Multithreading concepts, including ConcurrentHashMap, etc.
Analysis of algorithms: time complexity and Big O calculation.
Code optimization techniques; different algorithm designs and strategies.
Design patterns in Java.
Different sorting & searching algorithms with their time complexity analysis.
Problem solving using HackerRank or similar sites, with a focus on arrays, linked lists, hash maps, trees, balanced trees, stacks, queues, strings, big numbers, data structures, and object-oriented programming.
Exception Handling – Advanced, Sorting, Search, Recursion, Graph Theory.
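Several of the topics above (divide and conquer, sorting algorithms, Big O analysis) come together in merge sort. The sketch below is written in Python for brevity; the O(n log n) analysis annotated in the comments carries over directly to a Java implementation.

```python
# Merge sort: a classic O(n log n) divide-and-conquer sort. The
# recurrence T(n) = 2*T(n/2) + O(n) solves to O(n log n).

def merge_sort(xs):
    if len(xs) <= 1:              # base case: O(1)
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])   # T(n/2)
    right = merge_sort(xs[mid:])  # T(n/2)
    merged = []                   # merge step: O(n)
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])       # one side is exhausted; append the rest
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Contrasting this with an O(n²) sort such as insertion sort is a standard interview exercise in the time-complexity analysis listed above.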
Note: Core Java + PL/SQL (Unix will be an add-on)


- JavaScript, HTML
- SCSS/CSS/LESS skills
- Strong on React JS (minimum 2 years of experience is mandatory)
- ES6
Competencies:
- Strong understanding of JavaScript
- Good understanding of web markup, including HTML5 and CSS3
- Good understanding of advanced JavaScript libraries and frameworks.
- Proficient understanding of cross-browser compatibility issues and ways to work around such issues
- Familiarity with JavaScript module loaders, such as Require.js and AMD
- Familiarity with front-end build tools, such as Grunt and Gulp.js
- Proficient understanding of code versioning tools, such as (Git / Mercurial / SVN)
- Good understanding of browser rendering behavior and performance
- Proven technical expertise with Bootstrap and Foundation.
- Familiar with Service-Oriented Architecture (SOA).
- Develop responsive design websites using frameworks like Bootstrap.
- Develop websites using JavaScript frameworks like jQuery, AngularJS, and Backbone.
- Must be eager to learn, seek out new solutions, and adapt quickly within a dynamic technical environment.
- Excellent communication and self-motivation skills.
- Ability to adapt quickly to changing priorities and unforeseen requests
- Proven interpersonal, analytical, attention to detail/strategy, and creative problem-solving skills
- Passionate about technology
- Must be able to work with minimal supervision on multiple concurrent projects
- Familiarity with Agile/Scrum software development methodologies.
Company: Torrent Power Ltd
Location: Ahmedabad (only Ahmedabad candidates can apply)
Profile: AM – FICO (Integration/Implementation Consultant)
Experience: 6+ years (relevant)
Full-time: Required
CTC: Up to 20 LPA
Job Description:
SAP FICO Functional Support
• Conceptualizing & Mapping of all Business Scenarios
• Functional support for all SAP FICO Modules including Product Costing
• Expertise in integration with all other SAP R3/ISU modules
• Understanding & knowledge of MDM
• Expertise in and understanding of the migration activity
• SAP IS-U Developments
• Preparing Functional & Technical Specification Document
• Implementation/Execution of development logic
• Technical / Functional testing of IS-U FICO developments
• SAP Roles & Authorization
• Role creation according to requirements and assignment of roles to users

- 5+ years experience with Hyland OnBase (Content Management)
- Have knowledge of OnBase workflows
- Knowledge of the document intake process, indexing, etc., on OnBase
- Experience writing custom .NET scripts to intervene in OnBase workflows
Process Skills:
- Hands-on experience with the Agile Scrum methodology.
- Contribute to all phases of the development life cycle, including writing well-designed, testable, efficient code.
Behavioral Skills:
- Comfort with ambiguity and ability to navigate uncertainty.
- Must be capable of working independently and collaboratively.

