We are looking for BE/BTech graduates (2018/2019 batch) who want to build their careers as Data Engineers, covering technologies such as Hadoop, NoSQL, RDBMS, Spark, Kafka, Hive, ETL, MDM & Data Quality. You should be willing to learn, explore, experiment, and develop POCs/solutions using these technologies with guidance and support from highly experienced industry leaders. You should be passionate about your work and willing to go the extra mile to achieve results.
We are looking for candidates who believe in commitment and in building strong relationships. We need people who are passionate about solving problems through software and are flexible.
Required Experience, Skills and Qualifications
Passionate to learn and explore new technologies
Any RDBMS experience (SQL Server/Oracle/MySQL)
Any ETL tool experience (Informatica/Talend/Kettle/SSIS)
Understanding of Big Data technologies
Good Communication Skills
Excellent Mathematical / Logical / Reasoning Skills
Qualifications & Experience:
▪ 2-4 years overall experience in ETL, data pipelines, data warehouse development, and database design
▪ Software solution development using Hadoop Technologies such as MapReduce, Hive, Spark, Kafka, Yarn/Mesos etc.
▪ Expert in SQL, with 2+ years working on advanced SQL
▪ Good development skills in Java, Python or other languages
▪ Experience with EMR, S3
▪ Knowledge and exposure to BI applications, e.g. Tableau, Qlikview
▪ Comfortable working in an agile environment
JD Code: SHI-LDE-01
Version#: 1.0
Date of JD Creation: 27-March-2023
Position Title: Lead Data Engineer
Reporting to: Technical Director
Location: Bangalore Urban, India (on-site)
SmartHub.ai (www.smarthub.ai) is a fast-growing Startup headquartered in Palo Alto, CA, and with offices in Seattle and Bangalore. We operate at the intersection of AI, IoT & Edge Computing. With strategic investments from leaders in infrastructure & data management, SmartHub.ai is redefining the Edge IoT space. Our “Software Defined Edge” products help enterprises rapidly accelerate their Edge Infrastructure Management & Intelligence. We empower enterprises to leverage their Edge environment to increase revenue, efficiency of operations, manage safety and digital risks by using Edge and AI technologies.
SmartHub is an equal opportunity employer committed to nurturing a workplace culture that supports, inspires, and respects all individuals, and encourages employees to bring their best selves to work, laugh, and share. We seek builders from a variety of backgrounds, perspectives, and skills to join our team.
Summary
This role requires the candidate to translate business and product requirements to build, maintain, and optimize data systems, which may be relational or non-relational in nature. The candidate is expected to tune and analyze data for short- and long-term trend analysis, reporting, and AI/ML use cases.
We are looking for a talented technical professional with at least 8 years of proven experience in owning, architecting, designing, operating and optimising databases that are used for large scale analytics and reports.
Responsibilities
- Provide technical & architectural leadership for the next generation of product development.
- Innovate: research and evaluate new technologies and tools for quality output.
- Architect, design, and implement solutions ensuring scalability, performance, and security.
- Code and implement new algorithms to solve complex problems.
- Analyze complex data, develop, optimize and transform large data sets both structured and unstructured.
- Deploy and administer databases, continuously tuning for performance, especially on container orchestration stacks such as Kubernetes.
- Develop analytical models and solutions.
- Mentor junior members technically in architecture, design, and robust coding.
- Work in an Agile development environment while continuously evaluating and improving engineering processes.
Required
- At least 8 years of experience with significant depth in designing and building scalable distributed database systems for enterprise class products, experience of working in product development companies.
- Should have been feature/component lead for several complex features involving large datasets.
- Strong background in relational and non-relational databases such as Postgres, MongoDB, and Hadoop.
- Deep expertise in database optimization and tuning; SQL, Time Series Databases, Apache Drill, HDFS, and Spark are good to have.
- Excellent analytical and problem-solving skill sets.
- Experience designing for high throughput is highly desirable.
- Exposure to database provisioning in Kubernetes/non-Kubernetes environments, configuration and tuning in a highly available mode.
- Demonstrated ability to provide technical leadership and mentoring to the team
Location: Pune/Nagpur/Goa/Hyderabad
Job Requirements:
- 9+ years of total experience, preferably in the big data space.
- Creating Spark applications using Scala to process data.
- Experience in scheduling and troubleshooting/debugging Spark jobs in steps.
- Experience in Spark job performance tuning and optimization.
- Should have experience in processing data using Kafka/Python.
- Should have experience and understanding in configuring Kafka topics to optimize performance.
- Should be proficient in writing SQL queries to process data in Data Warehouse.
- Hands on experience in working with Linux commands to troubleshoot/debug issues and creating shell scripts to automate tasks.
- Experience on AWS services like EMR.
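As a small illustration of the Kafka capacity reasoning the bullets above touch on, here is a minimal sketch of a common rule of thumb for sizing topic partitions. The helper name and throughput figures are hypothetical, not from the posting:

```python
import math

def partitions_needed(target_mb_per_s: float, per_partition_mb_per_s: float) -> int:
    """Rule-of-thumb partition count: total target throughput divided by
    what a single partition can sustain, rounded up."""
    return math.ceil(target_mb_per_s / per_partition_mb_per_s)

# Hypothetical figures: 250 MB/s target, ~30 MB/s sustained per partition.
print(partitions_needed(250, 30))  # → 9
```

In practice the per-partition figure is measured by benchmarking, and the result is usually padded for headroom, since repartitioning a live topic is disruptive.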
*Apply only if you are serving your notice period.
HIRING SQL Developers with a maximum notice period of 20 days
Job ID: TNS2023DB01
Who Should apply?
- Only serious job seekers who are ready to work the night shift
- Technically strong candidates who are willing to take up challenging roles and want to raise their career graph
- No DBAs & BI Developers, please
Why Think n Solutions Software?
- Exposure to the latest technology
- Opportunity to work on different platforms
- Rapid Career Growth
- Friendly Knowledge-Sharing Environment
Criteria:
- BE/MTech/MCA/MSc
- 2+ years hands-on experience in MS SQL / NoSQL
- Immediate joiners preferred; maximum notice period of 15 to 20 days
- Candidates will be selected based on logical/technical and scenario-based testing
- Work time - 10:00 pm to 6:00 am
Note: Candidates who have attended the interview process with TnS in the last 6 months will not be eligible.
Job Description:
- Technical Skills Desired:
- Experience in MS SQL Server plus one of these relational DBs (PostgreSQL / AWS Aurora / MySQL) or any NoSQL DB (MongoDB / DynamoDB / DocumentDB) in an application development environment, and eagerness to switch
- Design database tables, views, indexes
- Write functions and procedures for Middle Tier Development Team
- Work with any front-end developers in completing the database modules end to end (hands-on experience in the parsing of JSON & XML in Stored Procedures would be an added advantage).
- Query Optimization for performance improvement
- Design & develop SSIS Packages or any other Transformation tools for ETL
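The query-optimization bullet above can be sketched with a tiny, self-contained example. SQLite is used here purely as a stand-in for MS SQL Server/PostgreSQL, and the table and index names are hypothetical:

```python
import sqlite3

# Illustrative only: SQLite as a stand-in for a production RDBMS.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")
cur.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)
cur.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

# EXPLAIN QUERY PLAN shows the optimizer using the index (an index SEARCH,
# not a full-table SCAN) once the index exists.
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM orders WHERE customer_id = ?",
    (42,),
).fetchall()
print(plan)
```

The same habit, checking the execution plan before and after adding an index, carries over directly to SQL Server's `SET SHOWPLAN` and PostgreSQL's `EXPLAIN`.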
- Functional Skills Desired:
- Experience in the Banking / Insurance / Retail domain would be a plus
- Interaction with clients would be a plus
- Good to Have Skills:
- Knowledge in a Cloud Platform (AWS / Azure)
- Knowledge on version control system (SVN / Git)
- Exposure to Quality and Process Management
- Knowledge in Agile Methodology
- Soft skills: (additional)
- Team building (attitude to train, work along, and mentor juniors)
- Communication skills (all kinds)
- Quality consciousness
- Analytical acumen applied to all business requirements
- Out-of-the-box thinking for business solutions
Work Timing: 5 Days A Week
Responsibilities include:
• Ensure the right stakeholders get the right information at the right time
• Requirement gathering with stakeholders to understand their data requirement
• Creating and deploying reports
• Participate actively in datamarts design discussions
• Work on both RDBMS as well as Big Data for designing BI Solutions
• Write code (queries/procedures) in SQL / Hive / Drill that is both functional and elegant, following appropriate design patterns
• Design and plan BI solutions to automate regular reporting
• Debugging, monitoring and troubleshooting BI solutions
• Creating and deploying datamarts
• Writing relational and multidimensional database queries
• Integrate heterogeneous data sources into BI solutions
• Ensure Data Integrity of data flowing from heterogeneous data sources into BI solutions.
Minimum Job Qualifications:
• BE/B.Tech in Computer Science/IT from Top Colleges
• 1-5 years of experience in data warehousing and SQL
• Excellent Analytical Knowledge
• Excellent technical as well as communication skills
• Attention to even the smallest detail is mandatory
• Knowledge of SQL query writing and performance tuning
• Knowledge of Big Data technologies like Apache Hadoop, Apache Hive, Apache Drill
• Knowledge of fundamentals of Business Intelligence
• In-depth knowledge of RDBMS systems, data warehousing, and datamarts
• Smart, motivated and team oriented
Desirable Requirements
• Sound knowledge of software development in Programming (preferably Java )
• Knowledge of the software development lifecycle (SDLC) and models
Preferred Education & Experience:
- Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. Relevant experience of at least 3 years in lieu of the above if from a different stream of education.
- Well-versed in and 5+ years of hands-on demonstrable experience with:
▪ Data Analysis & Data Modeling
▪ Database Design & Implementation
▪ Database Performance Tuning & Optimization
▪ PL/pgSQL & SQL
- 5+ years of hands-on development experience in a relational database (PostgreSQL/SQL Server/Oracle).
- 5+ years of hands-on development experience in SQL and PL/pgSQL, including stored procedures, functions, triggers, and views.
- Hands-on working experience with database design principles, SQL query optimization techniques, index management, integrity checks, statistics, and isolation levels.
- Hands-on working experience in database read & write performance tuning and optimization.
- Knowledge of and working experience with Domain-Driven Design (DDD) concepts, Object-Oriented Programming (OOP) concepts, cloud architecture concepts, and NoSQL database concepts are added values.
- Knowledge of and working experience in the Oil & Gas, Financial, and Automotive domains is a plus.
- Hands-on development experience in one or more NoSQL data stores such as Cassandra, HBase, MongoDB, DynamoDB, Elasticsearch, Neo4j, etc. is a plus.
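As a small illustration of the trigger experience listed above, here is a sketch using SQLite's trigger syntax as a lightweight stand-in for PL/pgSQL. The schema and names are hypothetical:

```python
import sqlite3

# Illustrative only: an audit trigger, the kind of logic a PL/pgSQL
# trigger function would hold in PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
CREATE TABLE audit (account_id INTEGER, old_balance REAL, new_balance REAL);
CREATE TRIGGER trg_balance_audit AFTER UPDATE OF balance ON accounts
BEGIN
  INSERT INTO audit VALUES (OLD.id, OLD.balance, NEW.balance);
END;
""")
conn.execute("INSERT INTO accounts VALUES (1, 100.0)")
conn.execute("UPDATE accounts SET balance = 150.0 WHERE id = 1")
rows = conn.execute("SELECT * FROM audit").fetchall()
print(rows)  # the trigger recorded the old and new balance
```

In PostgreSQL the body would live in a `plpgsql` trigger function referencing `OLD`/`NEW` the same way, attached with `CREATE TRIGGER ... EXECUTE FUNCTION`.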
Skills: Informatica with Big Data Management
1. Minimum 6 to 8 years of experience in Informatica BDM development
2. Experience working on Spark/SQL
3. Develops Informatica mappings/SQL
- Experience providing technical leadership in the Big Data space (Hadoop stack: Spark, M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.). Should have contributed to open-source Big Data technologies.
- Expert-level proficiency in Python
- Experience in visualizing and evangelizing next-generation infrastructure in Big Data space (Batch, Near Real-time, Real-time technologies).
- Passionate for continuous learning, experimenting, applying, and contributing towards cutting edge open source technologies and software paradigms
- Strong understanding and experience in distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN; MR & HDFS) and associated technologies.
- Hands-on experience with Apache Spark and its components (Streaming, SQL, MLLib)
- Operating knowledge of cloud computing platforms (AWS, especially EMR, EC2, S3, SWF services, and the AWS CLI)
- Experience working within a Linux computing environment and use of command-line tools, including knowledge of shell/Python scripting for automating common tasks
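The shell-scripting point above can be illustrated with a minimal sketch of driving a Linux command from Python; the log content is made up for the example:

```python
import subprocess

# Illustrative only: automate a routine check (count ERROR lines in a log)
# by piping text through a standard Linux tool.
log = "INFO start\nERROR disk full\nINFO retry\nERROR disk full\n"
result = subprocess.run(
    ["grep", "-c", "ERROR"],  # -c prints the count of matching lines
    input=log,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # → 2
```

The same task could be a one-line shell script (`grep -c ERROR app.log`); wrapping it in Python becomes worthwhile once the automation needs branching, retries, or reporting.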
Job description
Role : Lead Architecture (Spark, Scala, Big Data/Hadoop, Java)
Primary Location : India-Pune, Hyderabad
Experience : 7 - 12 Years
Management Level: 7
Joining Time: Immediate Joiners are preferred
- Attend requirements gathering workshops, estimation discussions, design meetings and status review meetings
- Experience in solution design and solution architecture for data engineering, building and implementing Big Data projects on-premises and in the cloud.
- Align architecture with business requirements and stabilize the developed solution.
- Ability to build prototypes to demonstrate the technical feasibility of your vision
- Professional experience facilitating and leading solution design, architecture and delivery planning activities for data intensive and high throughput platforms and applications
- Able to benchmark systems, analyze system bottlenecks, and propose solutions to eliminate them
- Able to help programmers and project managers in the design, planning and governance of implementing projects of any kind.
- Develop, construct, test and maintain architectures and run Sprints for development and rollout of functionalities
- Data analysis and code development experience, ideally in Big Data: Spark, Hive, Hadoop, Java, Python, PySpark
- Execute projects of various types i.e. Design, development, Implementation and migration of functional analytics Models/Business logic across architecture approaches
- Work closely with Business Analysts to understand the core business problems and deliver efficient IT solutions of the product
- Deploy sophisticated analytics programs using any cloud application.
Perks and Benefits we Provide!
- Working with Highly Technical and Passionate, mission-driven people
- Subsidized Meals & Snacks
- Flexible Schedule
- Approachable leadership
- Access to various learning tools and programs
- Pet Friendly
- Certification Reimbursement Policy
- Check out more about us on our website below!
www.datametica.com
• Total of 4+ years of experience in development, architecting/designing and implementing Software solutions for enterprises.
• Must have strong programming experience in either Python or Java/J2EE.
• Minimum of 4+ years' experience working with various cloud platforms, preferably Google Cloud Platform.
• Experience in Architecting and Designing solutions leveraging Google Cloud products such as Cloud BigQuery, Cloud DataFlow, Cloud Pub/Sub, Cloud BigTable and Tensorflow will be highly preferred.
• Presentation skills with a high degree of comfort speaking with management and developers
• The ability to work in a fast-paced work environment
• Excellent communication, listening, and influencing skills
RESPONSIBILITIES:
• Lead teams to implement and deliver software solutions for Enterprises by understanding their requirements.
• Communicate efficiently and document the Architectural/Design decisions to customer stakeholders/subject matter experts.
• Learn new products quickly, rapidly comprehend new technical and functional areas, and apply detailed, critical thinking to customer solutions.
• Implementing and optimizing cloud solutions for customers.
• Migration of Workloads from on-prem/other public clouds to Google Cloud Platform.
• Provide solutions to team members for complex scenarios.
• Promote good design and programming practices with various teams and subject matter experts.
• Ability to work on any product on the Google cloud platform.
• Must be hands-on and be able to write code as required.
• Ability to lead junior engineers and conduct code reviews
QUALIFICATION:
• Minimum B.Tech/B.E Engineering graduate