Company – Tekclan Software Solutions
Position – SQL Developer
Experience – Minimum 4 years of experience in MS SQL Server, SQL programming, and ETL development
Location – Chennai
We are seeking a highly skilled SQL Developer with expertise in MS SQL Server, SSRS, and SQL programming, including writing stored procedures, and with proficiency in ETL using SSIS. The ideal candidate will have a strong understanding of database concepts, query optimization, and data modeling.
Responsibilities:
1. Develop, optimize, and maintain SQL queries, stored procedures, and functions for efficient data retrieval and manipulation (a brief illustrative sketch follows this list).
2. Design and implement ETL processes using SSIS for data extraction, transformation, and loading from various sources.
3. Collaborate with cross-functional teams to gather business requirements and translate them into technical specifications.
4. Create and maintain data models, ensuring data integrity, normalization, and performance.
5. Generate insightful reports and dashboards using SSRS to facilitate data-driven decision making.
6. Troubleshoot and resolve database performance issues, bottlenecks, and data inconsistencies.
7. Conduct thorough testing and debugging of SQL code to ensure accuracy and reliability.
8. Stay up-to-date with emerging trends and advancements in SQL technologies and provide recommendations for improvement.
9. Should be able to work independently as an individual contributor.
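For illustration only, a minimal sketch of calling a SQL Server stored procedure from Python via pyodbc; the connection string, procedure name (dbo.usp_GetCustomerOrders), parameter, and column names are hypothetical placeholders, not details of this role:

```python
# Hypothetical sketch: invoking a SQL Server stored procedure from Python.
import pyodbc

# Connection details are placeholders; a real deployment would load them from config.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=SalesDB;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Parameterized ODBC call syntax; avoids string concatenation and SQL injection.
cursor.execute("{CALL dbo.usp_GetCustomerOrders (?)}", 42)
for row in cursor.fetchall():
    print(row.OrderID, row.OrderDate)  # columns assumed from the proc's result set

cursor.close()
conn.close()
```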
Requirements:
1. Minimum of 4 years of experience in MS SQL Server, SQL programming, and ETL development.
2. Proven experience as a SQL Developer with a strong focus on MS SQL Server.
3. Proficiency in SQL programming, including writing complex queries, stored procedures, and functions.
4. In-depth knowledge of ETL processes and hands-on experience with SSIS.
5. Strong expertise in creating reports and dashboards using SSRS.
6. Familiarity with database design principles, query optimization, and data modeling.
7. Experience with performance tuning and troubleshooting SQL-related issues.
8. Excellent problem-solving skills and attention to detail.
9. Strong communication and collaboration abilities.
10. Ability to work independently and handle multiple tasks simultaneously.
Preferred Skills:
1. Certification in MS SQL Server or related technologies.
2. Knowledge of other database systems such as Oracle or MySQL.
3. Familiarity with data warehousing concepts and tools.
4. Experience with version control systems.
Similar jobs
Required skills and experience:
- Solid experience working in Big Data ETL environments with Spark and Java/Scala/Python (a brief sketch follows this list)
- Strong experience with AWS cloud technologies (EC2, EMR, S3, Kinesis, etc.)
- Experience building monitoring/alerting frameworks with tools like New Relic, with escalations via Slack/email/dashboard integrations, etc.
- Executive-level communication, prioritization, and team leadership skills
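As a hedged illustration of the Spark-on-AWS item above, a minimal PySpark ETL sketch; the bucket paths, column names, and EMR context are assumptions, not specifics of this role:

```python
# Illustrative PySpark ETL sketch for an AWS environment (e.g., running on EMR).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-etl-sketch").getOrCreate()

# Extract: read raw events from S3 (hypothetical bucket).
events = spark.read.json("s3://example-raw-bucket/events/")

# Transform: keep valid rows and aggregate per user per day.
daily = (
    events.filter(F.col("event_type").isNotNull())
          .groupBy("user_id", F.to_date("ts").alias("event_date"))
          .count()
)

# Load: write partitioned Parquet back to S3.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated-bucket/daily_counts/"
)
```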
Data Engineer JD:
- Designing, developing, constructing, installing, testing, and maintaining complete data management & processing systems.
- Building highly scalable, robust, fault-tolerant, & secure user data platform adhering to data protection laws.
- Taking care of the complete ETL (Extract, Transform & Load) process.
- Ensuring architecture is planned in such a way that it meets all the business requirements.
- Exploring new ways of using existing data, to provide more insights out of it.
- Proposing ways to improve data quality, reliability & efficiency of the whole system.
- Creating data models to reduce system complexity and hence increase efficiency & reduce cost.
- Introducing new data management tools & technologies into the existing system to make it more efficient.
- Setting up monitoring and alerting on data pipeline jobs to detect failures and anomalies (a brief sketch follows this list)
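One common way to implement the monitoring/alerting item above is a scheduler-level failure callback; the sketch below uses Apache Airflow (named in the requirements that follow), with an illustrative DAG id and a stubbed notification:

```python
# Hedged sketch: an Airflow DAG whose tasks alert on failure via a callback.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def notify_failure(context):
    # In practice this would post to Slack, PagerDuty, or email.
    print(f"Task {context['task_instance'].task_id} failed")

def run_etl():
    ...  # extract/transform/load logic would go here

with DAG(
    dag_id="etl_with_alerting",          # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"on_failure_callback": notify_failure, "retries": 1},
) as dag:
    PythonOperator(task_id="run_etl", python_callable=run_etl)
```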
What do we expect from you?
- BS/MS in Computer Science or equivalent experience
- 5 years of recent experience in Big Data Engineering.
- Good experience in working with Hadoop and Big Data technologies like HDFS, Pig, Hive, Zookeeper, Storm, Spark, Airflow and NoSQL systems
- Excellent programming and debugging skills in Java or Python.
- Hands-on experience with Apache Spark and Python, including deploying ML models
- Has worked on streaming and real-time pipelines
- Experience with Apache Kafka, or with any of Spark Streaming, Flume, or Storm (a minimal streaming sketch follows this list)
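For illustration, a minimal Spark Structured Streaming sketch that reads from Kafka; the broker address, topic name, and checkpoint path are placeholders, and the spark-sql-kafka connector package is assumed to be on the classpath:

```python
# Hedged sketch of a real-time pipeline: Structured Streaming from Kafka.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
         .option("subscribe", "events")                     # placeholder topic
         .load()
)

# Kafka delivers bytes; cast the value column before processing.
parsed = stream.select(F.col("value").cast("string").alias("payload"))

query = (
    parsed.writeStream.format("console")
          .outputMode("append")
          .option("checkpointLocation", "/tmp/checkpoints/events")
          .start()
)
query.awaitTermination()
```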
Focus Area:
R1 – Data Structures & Algorithms
R2 – Problem Solving + Coding
R3 – Design (LLD)
Data Engineer – SQL, RDBMS, PySpark/Scala, Python, Hive, Hadoop, Unix
Data engineering services required:
- Build data products and processes alongside the core engineering and technology team;
- Collaborate with senior data scientists to curate, wrangle, and prepare data for use in their advanced analytical models;
- Integrate data from a variety of sources, ensuring they adhere to data quality and accessibility standards;
- Modify and improve data engineering processes to handle ever larger, more complex, and more types of data sources and pipelines;
- Use Hadoop architecture and HDFS commands to design and optimize data queries at scale (a brief sketch follows this list);
- Evaluate and experiment with novel data engineering tools and advise information technology leads and partners about new capabilities to determine optimal solutions for particular technical problems or designated use cases.
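As an illustrative sketch of the query-at-scale item above, PySpark with Hive support pushing an aggregation into the engine; the database and table names (warehouse.orders) are hypothetical:

```python
# Hedged sketch: querying Hive tables through Spark instead of collecting raw data.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("hive-query-sketch")
    .enableHiveSupport()  # lets Spark resolve tables from the Hive metastore
    .getOrCreate()
)

# Filtering and aggregation run inside the engine, close to the data.
top_customers = spark.sql("""
    SELECT customer_id, SUM(amount) AS total_spend
    FROM warehouse.orders
    WHERE order_date >= '2024-01-01'
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 100
""")
top_customers.show()
```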
Big data engineering skills:
- Demonstrated ability to perform the engineering necessary to acquire, ingest, cleanse, integrate, and structure massive volumes of data from multiple sources and systems into enterprise analytics platforms;
- Proven ability to design and optimize queries to build scalable, modular, efficient data pipelines;
- Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets;
- Proven experience delivering production-ready data engineering solutions, including requirements definition, architecture selection, prototype development, debugging, unit-testing, deployment, support, and maintenance;
- Ability to operate with a variety of data engineering tools and technologies
Datametica is looking for talented SQL engineers who would get training & the opportunity to work on Cloud and Big Data Analytics.
Mandatory Skills:
- Strong in SQL development
- Hands-on experience with at least one scripting language, preferably shell scripting
- Development experience in Data warehouse projects
Opportunities:
- Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps tools, and Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume, and Kafka
- Would get a chance to be part of the enterprise-grade implementation of Cloud and Big Data systems
- Will play an active role in setting up the Modern data platform based on Cloud and Big Data
- Would be part of teams with rich experience in various aspects of distributed systems and computing
- Experience implementing large-scale ETL processes using Informatica PowerCenter.
- Design high-level ETL process and data flow from the source system to target databases.
- Strong experience with Oracle databases and strong SQL.
- Develop & unit test Informatica ETL processes for optimal performance utilizing best practices.
- Performance tune Informatica ETL mappings and report queries.
- Develop database objects like Stored Procedures, Functions, Packages, and Triggers using SQL and PL/SQL (a minimal sketch follows this list).
- Hands-on Experience in Unix.
- Experience in Informatica Cloud (IICS).
- Work with appropriate leads and review high-level ETL design, source to target data mapping document, and be the point of contact for any ETL-related questions.
- Good understanding of project life cycle, especially tasks within the ETL phase.
- Ability to work independently and multi-task to meet critical deadlines in a rapidly changing environment.
- Excellent communication and presentation skills.
- Experience working effectively in an onsite/offshore delivery model.
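For illustration of the PL/SQL item above, a minimal sketch of calling an Oracle stored procedure from Python with the python-oracledb driver; the credentials, DSN, and procedure name (billing_pkg.load_daily_invoices) are placeholders, and the procedure's signature is assumed:

```python
# Hedged sketch: invoking a PL/SQL stored procedure from Python.
import oracledb

# Credentials and DSN are placeholders; real code would read them from config.
conn = oracledb.connect(user="etl_user", password="secret", dsn="dbhost/ORCLPDB1")
cursor = conn.cursor()

# callproc maps directly onto a PL/SQL procedure call; the argument list
# must match the procedure's (assumed) signature.
cursor.callproc("billing_pkg.load_daily_invoices", ["2024-01-31"])

conn.commit()
cursor.close()
conn.close()
```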
JOB DESCRIPTION
- 2 to 6 years of experience in imparting technical training/ mentoring
- Must have very strong concepts of Data Analytics
- Must have hands-on and training experience with Python, advanced Python, R programming, SAS, and machine learning
- Must have good knowledge of SQL and Advanced SQL
- Should have basic knowledge of Statistics
- Should be well-versed in GNU/Linux operating systems and network fundamentals
- Must have knowledge of MS Office (Excel/Word/PowerPoint)
- Self-Motivated and passionate about technology
- Excellent analytical and logical skills and team player
- Must have exceptional Communication Skills/ Presentation Skills
- Good aptitude skills are preferred
Responsibilities:
- Ability to quickly learn any new technology and impart the same to other employees
- Ability to resolve all technical queries of students
- Conduct training sessions and drive placement-driven quality in the training
- Must be able to work independently without the supervision of a senior person
- Participate in reviews/ meetings
Qualification:
- UG: Any Graduate in IT/Computer Science, B.Tech/B.E. – IT/ Computers
- PG: MCA/MS/MSC – Computer Science
- Any Graduate/Postgraduate, provided they are certified in similar courses
ABOUT EDUBRIDGE
EduBridge is an Equal Opportunity employer and we believe in building a meritorious culture where everyone is recognized for their skills and contribution.
Launched in 2009, EduBridge Learning is a workforce development and skilling organization with 50+ training academies in 18 states pan-India. The organization has been providing skilled manpower to corporates for over 10 years and is a leader in its space. We have trained over a lakh semi-urban and economically underprivileged youth on relevant life skills and industry-specific skills, and provided placements in over 500 companies. Our latest product, E-ON, is committed to complementing our training delivery with an online training platform, enabling students to learn anywhere and anytime.
To know more about EduBridge please visit: http://www.edubridgeindia.com/
You can also visit us on Facebook and LinkedIn for our latest initiatives and products.
- Creating, designing and developing data models
- Prepare plans for all ETL (Extract/Transformation/Load) procedures and architectures
- Validating results and creating business reports
- Monitoring and tuning data loads and queries
- Develop and prepare a schedule for a new data warehouse
- Analyze large databases and recommend appropriate optimization for the same
- Administer all requirements and design various functional specifications for data
- Provide support to the Software Development Life cycle
- Prepare various code designs and ensure efficient implementation of the same
- Evaluate all codes and ensure the quality of all project deliverables
- Monitor data warehouse work and provide subject matter expertise
- Hands-on experience with BI practices, data structures, data modeling, and SQL
- Minimum 1 year of experience in PySpark (a brief sketch follows this list)
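In the spirit of the data-load and PySpark items above, a hedged sketch of a validated warehouse load; the staging path, column names, and target table (dw.fact_sales) are hypothetical:

```python
# Illustrative sketch: validate a staged load, then append it to a warehouse table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dw-load-sketch").getOrCreate()

staged = spark.read.parquet("/data/staging/sales/")  # placeholder path

# Validate results before loading: reject rows missing mandatory keys.
valid = staged.filter(F.col("sale_id").isNotNull() & F.col("sale_date").isNotNull())
rejected = staged.count() - valid.count()
print(f"Rejected {rejected} invalid rows")

# The target database/table are assumed to exist in the metastore.
valid.write.mode("append").partitionBy("sale_date").saveAsTable("dw.fact_sales")
```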
About WheelsEye :
Logistics in India is a complex business - layered with multiple stakeholders, unorganized, primarily offline, and with many trivial yet deep-rooted problems. Though this industry contributes 14% to the GDP, its problems have gone unattended and ignored, until now.
WheelsEye is a logistics company, building a digital infrastructure around fleet owners. Currently, we offer solutions to empower truck fleet owners. Our proprietary software & hardware solutions help automate operations, secure fleet, save costs, improve on-time performance, and streamline their business.
Why WheelsEye?
- Work on a real Indian problem of scale; impact the lives of 5.5 crore fleet owners, drivers, and their families in a meaningful way
- Different from current market players: heavily focused on and built around truck owners
- Problem-solving and learning-oriented organization
- Audacious goals, high speed, and action orientation
- Opportunity to scale the organization across the country
- Opportunity to build and execute the culture
- Contribute to and become a part of the action plan for building the tech, finance, and service infrastructure for the logistics industry
It's Tough!
Requirements:
- Bachelor's degree with an additional 2-5 years of experience in the analytics domain
- Experience in articulating and translating business questions and using statistical techniques to arrive at an answer using available data
- Proficient with a scripting and/or programming language, e.g. Python, R (optional), advanced SQL; advanced knowledge of data processing, database programming, and data analytics tools and techniques
- Extensive background in data mining, modeling, and statistical analysis; able to understand various data structures and common methods of data transformation, e.g. linear and logistic regression, clustering, decision trees, etc. (a brief illustration follows this list)
- Working knowledge of tools like Mixpanel, Metabase, Google Sheets, Google BigQuery & Data Studio is preferred
- Ability to self-start and work self-directed in a fast-paced environment
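As a hedged illustration of the statistical techniques named above, a minimal scikit-learn logistic regression fit; the features and target are synthetic, fabricated purely for demonstration:

```python
# Illustrative sketch: fit and score a logistic regression on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # three synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic binary target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")
```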
If you are willing to work on solving real-world problems for truck owners, join us!