Job Description
Position: Sr Data Engineer – Databricks & AWS
Experience: 4 - 5 Years
Company Profile:
Exponentia.ai is an AI tech organization with a presence across India, Singapore, the Middle East, and the UK. We are an innovative and disruptive organization, working on cutting-edge technology to help our clients transform into the enterprises of the future. We provide artificial intelligence-based products/platforms capable of automated cognitive decision-making to improve productivity, quality, and economics of the underlying business processes. Currently, we are transforming ourselves and rapidly expanding our business.
Exponentia.ai has developed long-term relationships with world-class clients such as PayPal, PayU, SBI Group, HDFC Life, Kotak Securities, Wockhardt and Adani Group amongst others.
One of the top partners of Cloudera (a leading analytics player) and Qlik (a leader in BI technologies), Exponentia.ai was awarded the ‘Innovation Partner Award’ by Qlik in 2017.
Get to know more about us on our website: http://www.exponentia.ai/ and Life @Exponentia.
Role Overview:
· A Data Engineer understands the client requirements and develops and delivers data engineering solutions within the agreed scope.
· The role requires strong skills in developing solutions using the services needed for a data architecture on Databricks Delta Lake, including streaming, AWS, ETL development, and data modeling.
Job Responsibilities
• Design data solutions on Databricks, including Delta Lake, data warehouses, data marts, and other data solutions to support the analytics needs of the organization.
• Apply best practices during design in data modeling (logical, physical) and ETL pipelines (streaming and batch) using cloud-based services.
• Design, develop and manage the pipelining (collection, storage, access), data engineering (data quality, ETL, data modelling) and understanding (documentation, exploration) of the data.
• Interact with stakeholders to understand the data landscape, conduct discovery exercises, and develop proofs of concept and demonstrate them to stakeholders.
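A minimal sketch of the batch side of these responsibilities, in plain Python rather than the PySpark/Delta Lake stack the role actually targets, and with invented field names, might look like:

```python
# Minimal batch ETL sketch: ingest -> validate -> transform -> load.
# In practice this logic would run as PySpark on Databricks against Delta
# tables; plain Python dicts stand in for rows here, and all field names
# are hypothetical.

def validate(row):
    """Basic data-quality gate: required id present and amount numeric."""
    return row.get("id") is not None and isinstance(row.get("amount"), (int, float))

def transform(row):
    """Derive a tax-inclusive amount (hypothetical 18% rate)."""
    return {**row, "amount_with_tax": round(row["amount"] * 1.18, 2)}

def run_pipeline(raw_rows):
    """Split records into good and bad, transform the good ones."""
    good, bad = [], []
    for row in raw_rows:
        (good if validate(row) else bad).append(row)
    return [transform(r) for r in good], bad

rows = [{"id": 1, "amount": 100.0}, {"id": None, "amount": 5.0}]
loaded, rejected = run_pipeline(rows)
```

Separating the quality gate from the transform keeps each step independently testable, which matters once the same logic is re-expressed as Spark jobs over streaming and batch inputs.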
Technical Skills
• More than 2 years of experience developing data lakes and data marts on the Databricks platform.
• Proven skill sets in AWS Data Lake services such as AWS Glue, S3, Lambda, SNS, and IAM, along with skills in Spark, Python, and SQL.
• Experience in Pentaho
• Good understanding of developing data warehouses, data marts, etc.
• A good understanding of system architectures and design patterns, with the ability to design and develop applications using these principles.
Personality Traits
• Good collaboration and communication skills
• Excellent problem-solving skills to be able to structure the right analytical solutions.
• Strong sense of teamwork, ownership, and accountability
• Analytical and conceptual thinking
• Ability to work in a fast-paced environment with tight schedules.
• Good presentation skills with the ability to convey complex ideas to peers and management.
Education:
BE / ME / MS / MCA.
Position: ETL Developer
Location: Mumbai
Experience Level: 4+ years
Required Skills:
* Strong scripting skills in Python and Unix shell (e.g. K-Shell)
* Strong relational database skills especially with DB2/Sybase
* Ability to create high-quality, optimized stored procedures and queries
* Strong knowledge of relational database performance and tuning: proper use of indices, database statistics/reorgs, and de-normalization concepts
* Familiarity with the lifecycle of a trade and the flows of data in an investment banking operation is a plus
* Experienced in Agile development process
* Java knowledge is a big plus but not essential
* Experience in delivery of metrics / reporting in an enterprise environment (e.g. demonstrated experience in BI tools such as Business Objects, Tableau, report design & delivery) is a plus
* Experience on ETL processes and tools such as Informatica is a plus. Real time message processing experience is a big plus.
* Good team player with integrity and ownership; makes an impact by enabling innovation and growth; someone with passion for what they do and a vision for the future
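The point above about proper use of indices can be demonstrated with SQLite from the Python standard library standing in for DB2/Sybase; the schema and data are hypothetical:

```python
# Sketch of index-aware query tuning. DB2/Sybase are the stated targets;
# SQLite (stdlib) stands in here to keep the example self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)")
conn.executemany("INSERT INTO trades (symbol, qty) VALUES (?, ?)",
                 [("INFY", 10), ("TCS", 5), ("INFY", 7)])

# Without an index, a lookup by symbol must scan the whole table.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM trades WHERE symbol = 'INFY'").fetchall()

# With an index, the planner can seek instead of scan.
conn.execute("CREATE INDEX idx_trades_symbol ON trades (symbol)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM trades WHERE symbol = 'INFY'").fetchall()

print(plan_before[0][-1])  # a SCAN of the trades table
print(plan_after[0][-1])   # a SEARCH using idx_trades_symbol
```

The same discipline carries over to DB2/Sybase: inspect the access plan, and add or adjust indices when a frequent predicate forces a full scan.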
Responsibilities:
- Be the analytical expert in Kaleidofin, managing ambiguous problems by using data to execute sophisticated quantitative modeling and deliver actionable insights.
- Develop comprehensive skills including project management, business judgment, analytical problem solving and technical depth.
- Become an expert on data and trends, both internal and external to Kaleidofin.
- Communicate key state of the business metrics and develop dashboards to enable teams to understand business metrics independently.
- Collaborate with stakeholders across teams to drive data analysis for key business questions, communicate insights and drive the planning process with company executives.
- Automate scheduling and distribution of reports and support auditing and value realization.
- Partner with enterprise architects to define and ensure that proposed Business Intelligence solutions adhere to an enterprise reference architecture.
- Design robust data-centric solutions and architecture that incorporate technology and strong BI solutions to scale up and eliminate repetitive tasks.
Requirements:
- Experience leading development efforts through all phases of SDLC.
- 5+ years "hands-on" experience designing Analytics and Business Intelligence solutions.
- Experience with Quicksight, PowerBI, Tableau and Qlik is a plus.
- Hands on experience in SQL, data management, and scripting (preferably Python).
- Strong data visualisation design skills, data modeling and inference skills.
- Hands-on, with experience in managing small teams.
- Financial services experience preferred, but not mandatory.
- Strong knowledge of architectural principles, tools, frameworks, and best practices.
- Excellent communication and presentation skills to communicate and collaborate with all levels of the organisation.
- Team-handling experience preferred for candidates with 5+ years of experience.
- Notice period less than 30 days.
Job brief
We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights.
In this role, you should be highly analytical with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine-learning and research.
Your goal will be to help our company analyze trends to make better decisions.
Requirements
1. 2 to 4 years of relevant industry experience
2. Experience with linear algebra, statistics and probability (e.g. distributions), as well as machine learning and deep learning
3. Strong mathematical and statistics background is a must
4. Experience in machine learning frameworks such as TensorFlow, Caffe, PyTorch, or MXNet
5. Strong industry experience in using design patterns, algorithms and data structures
6. Industry experience in using feature engineering, model performance tuning, and optimizing machine learning models
7. Hands on development experience in Python and packages such as NumPy, Sci-Kit Learn and Matplotlib
8. Experience in model building and hyperparameter tuning
About LodgIQ
LodgIQ is led by a team of experienced hospitality technology experts, data scientists and product domain experts. Seed funded by Highgate Ventures, a venture capital platform focused on early stage technology investments in the hospitality industry and Trilantic Capital Partners, a global private equity firm, LodgIQ has made a significant investment in advanced machine learning platforms and data science.
Title : Data Scientist
Job Description:
- Apply Data Science and Machine Learning to a REAL-LIFE problem - “Predict Guest Arrivals and Determine Best Prices for Hotels”
- Apply advanced analytics in a BIG Data Environment – AWS, MongoDB, SKLearn
- Help scale up the product in a global offering across 100+ global markets
Qualifications:
- Minimum 3 years of experience with advanced data analytic techniques, including data mining, machine learning, statistical analysis, and optimization. Student projects are acceptable.
- At least 1 year of experience with Python / NumPy / Pandas / SciPy / Matplotlib / Scikit-Learn
- Experience in working with massive data sets, including structured and unstructured with at least 1 prior engagement involving data gathering, data cleaning, data mining, and data visualization
- Solid grasp over optimization techniques
- Master's or PhD degree in Business Analytics, Data Science, Statistics or Mathematics
- Ability to show a track record of solving large, complex problems
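As a toy illustration of the pricing problem described above (not LodgIQ's actual method), a revenue-maximizing rate can be found by grid search over a hypothetical linear demand curve:

```python
# Toy pricing optimization: pick the nightly rate that maximizes expected
# revenue under an invented linear demand model. All numbers are
# hypothetical; real systems learn demand from data with ML models.

def expected_bookings(price, base_demand=100.0, sensitivity=0.4):
    """Hypothetical linear demand: bookings fall as price rises."""
    return max(0.0, base_demand - sensitivity * price)

def best_price(candidates):
    """Grid search for the revenue-maximizing price."""
    return max(candidates, key=lambda p: p * expected_bookings(p))

prices = range(50, 251, 5)
optimum = best_price(prices)
```

Even this toy version shows the structure of the problem: a demand model maps price to volume, and the optimizer searches that curve for the revenue peak.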
As a Data Engineer, your role will encompass:
- Designing and building production data pipelines from ingestion to consumption within a hybrid big data architecture using Scala, Python, Talend, etc.
- Gathering and addressing technical and design requirements.
- Refactoring existing applications to optimize their performance by setting the appropriate architecture and integrating best practices and standards.
- Participating in the entire data life-cycle, mainly focusing on coding, debugging, and testing.
- Troubleshooting and debugging ETL pipelines.
- Documenting each process.
Technical Requirements:
- BSc degree in Computer Science/Computer Engineering (a Master's is a plus).
- 2+ years of experience as a Data Engineer.
- In-depth understanding of core ETL concepts, Data Modelling, Data Lineage, Data Governance, Data Catalog, etc.
- 2+ years of work experience in Scala, Python, Java.
- Good knowledge of Big Data tools such as Spark, HDFS, Hive, Flume, etc.
- Hands-on experience with ETL tools like Talend/Informatica is a plus.
- Good knowledge of Kafka and Spark Streaming is a big plus.
- 2+ years of experience using Azure cloud and its resources/services (such as Azure Data Factory, Azure Databricks, Azure Synapse, Azure DevOps, Logic Apps, Power BI, Azure Event Hubs, etc.).
- Strong experience in Relational Databases (MySQL, SQL Server)
- Exposure to data visualization tools like Power BI / Qlik Sense / MicroStrategy
- 2+ years of experience in developing APIs (REST & SOAP protocols).
- Strong knowledge in Continuous Integration & Continuous Deployment (CI/CD) utilizing Docker containers, Jenkins, etc.
- Strong competencies in algorithms and software architecture.
- Excellent analytical and teamwork skills.
Good to have:
- Previous on-prem working experience is a plus.
- In-depth understanding of the entire web development process (design, development, and deployment)
- Previous experience in automated testing including unit testing & UI testing.
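On the automated-testing point above: pure transform functions are straightforward to unit-test before they are wired into an ETL pipeline. A minimal sketch, with a hypothetical record layout:

```python
# Unit-testing sketch for an ETL transform. The record fields are
# invented for illustration; in a real project this test would live in a
# pytest suite rather than run inline.

def normalize_record(record):
    """Trim string fields, upper-case the currency code, coerce amount."""
    return {
        "customer": record["customer"].strip(),
        "currency": record["currency"].strip().upper(),
        "amount": float(record["amount"]),
    }

def test_normalize_record():
    out = normalize_record({"customer": "  Acme ", "currency": "usd", "amount": "42.5"})
    assert out == {"customer": "Acme", "currency": "USD", "amount": 42.5}

test_normalize_record()
```

Keeping transforms free of I/O is the design choice that makes this possible: the pipeline glue (Spark, Talend, Kafka consumers) can then be tested separately from the business logic.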
About the role
- Collaborating with a team of like-minded and experienced engineers for Tier 1 customers, you will focus on data engineering on large complex data projects. Your work will have an impact on platforms that handle crores of customers and millions of transactions daily.
- As an engineer, you will use the latest cloud services to design and develop reusable core components and frameworks to modernise data integrations in a cloud first world and own those integrations end to end working closely with business units. You will design and build for efficiency, reliability, security and scalability. As a consultant, you will help drive a data engineering culture and advocate best practices.
Mandatory experience
- 1-6 years of relevant experience
- Strong SQL skills and data literacy
- Hands-on experience designing and developing data integrations, either in ETL tools, cloud native tools or in custom software
- Proficiency in scripting and automation (e.g. PowerShell, Bash, Python)
- Experience in an enterprise data environment
- Strong communication skills
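The scripting-and-automation skill above often takes the form of small validation scripts around data integrations. A minimal Python sketch (with inline file contents so it is self-contained) reconciles row counts between a source extract and its loaded target:

```python
# Hedged sketch of integration automation: verify that a load is complete
# by comparing row counts between source and target extracts. Real
# scripts would read actual files or query the target database; the CSV
# content here is inlined for illustration.
import csv
import io

source_csv = "id,name\n1,a\n2,b\n3,c\n"
target_csv = "id,name\n1,a\n2,b\n3,c\n"

def row_count(text):
    """Count data rows, excluding the header."""
    return sum(1 for _ in csv.DictReader(io.StringIO(text)))

def counts_match(src, tgt):
    """True when source and target hold the same number of rows."""
    return row_count(src) == row_count(tgt)

print(counts_match(source_csv, target_csv))
```

Checks like this are typically scheduled after each load and wired into alerting, so a silent partial load surfaces immediately.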
Desirable experience
- Ability to work on data architecture, data models, data migration, integration and pipelines
- Ability to work on data platform modernisation from on-premise to cloud-native
- Proficiency in data security best practices
- Stakeholder management experience
- Positive attitude with the flexibility and ability to adapt to an ever-changing technology landscape
- Desire to gain breadth and depth of technologies to support customer's vision and project objectives
What to expect if you join Servian?
- Learning & Development: We invest heavily in our consultants, offer weekly internal training (both technical and non-technical alike!) and abide by a ‘You Pass, We Pay’ policy.
- Career progression: We take a longer term view of every hire. We have a flat org structure and promote from within. Every hire is developed as a future leader and client adviser.
- Variety of projects: As a consultant, you will have the opportunity to work on multiple projects across our client base, significantly increasing your skills and exposure in the industry.
- Great culture: Working on the latest Apple MacBook Pro in our custom-designed offices in the heart of leafy Jayanagar, we provide a peaceful and productive work environment close to shops, parks and the metro station.
- Professional development: We invest heavily in professional development both technically, through training and guided certification pathways, and in consulting, through workshops in client engagement and communication. Growth in our organisation happens from the growth of our people.
The Data Engineer will be responsible for selecting and integrating the required Big Data tools and frameworks, and for implementing data ingestion and ETL/ELT processes.
Required Experience, Skills and Qualifications:
- Hands-on experience with Big Data tools/technologies like Spark, Databricks, MapReduce, Hive, HDFS
- Expertise in and excellent understanding of big data toolsets such as Sqoop, Spark Streaming, Kafka, NiFi
- Proficiency in any of the following programming languages: Python / Scala / Java, with 4+ years’ experience
- Experience with cloud infrastructures like MS Azure, Data Lake, etc.
- Good working knowledge of NoSQL DBs (MongoDB, HBase, Cassandra)
Qentelli is seeking a Solution Architect to untangle and redesign a large, aging monolithic legacy system. The interesting part is that the new system will be commissioned module by module, with the legacy system phasing out accordingly, so your design will cover both a cutting-edge future state and a transition state to get there. The current implementation is entirely on the Microsoft tech stack and will continue on a newer Microsoft tech stack. There is also a critical API-management component to be introduced into the solution. Performance and scalability will be at the center of your solution architecture, and strong data modelling knowledge is of very high importance.
You’ll have a distributed team, with onshore members in the US and offshore members in India. As a Solution Architect, you should be able to wear multiple hats: working with the client on solutioning and getting it implemented by engineering and infrastructure teams both onshore and offshore. The right candidate will excel at fleshing out and documenting every finer detail of the solution, communicate clearly with their teams, be disciplined about getting it implemented, and be passionate about client success.
Greetings from Qentelli Solutions Private Limited!
We are hiring for PostgreSQL Developer
Experience: 4 to 12 years
Job Location: Hyderabad
Job Description:
- Experience in RDBMS (PostgreSQL preferred), database backend development, data modelling and performance tuning, with exposure to NoSQL DBs, Kubernetes or cloud (AWS/Azure/GCS)
Skillset for Developer-II:
- Experience with any Big Data tools (NiFi, Kafka, Spark, Sqoop, Storm, Snowflake), database backend development, Python, NoSQL DBs and API development, with cloud or Kubernetes exposure
Skillset for API Developer:
- API Development with extensive knowledge on any RDBMS (preferred PostgreSQL), exposure to cloud or Kubernetes
We are looking for BE/BTech graduates (2018/2019 pass-outs) who want to build their careers as Data Engineers covering technologies like Hadoop, NoSQL, RDBMS, Spark, Kafka, Hive, ETL, MDM & Data Quality. You should be willing to learn, explore, experiment and develop POCs/solutions using these technologies with guidance and support from highly experienced industry leaders. You should be passionate about your work and willing to go the extra mile to achieve results.
We are looking for candidates who believe in commitment and in building strong relationships. We need people who are passionate about solving problems through software and are flexible.
Required Experience, Skills and Qualifications
Passionate to learn and explore new technologies
Any RDBMS experience (SQL Server/Oracle/MySQL)
Any ETL tool experience (Informatica/Talend/Kettle/SSIS)
Understanding of Big Data technologies
Good Communication Skills
Excellent Mathematical / Logical / Reasoning Skills