Who are we?
We are incubators of high-quality, dedicated software engineering teams for our clients. We work with product organizations to help them scale or modernize their legacy technology solutions. We work with startups to help them operationalize their ideas efficiently. Incubyte strives to find people who are passionate about coding, learning, and growing along with us. We work with a limited number of clients at a time on dedicated, long-term commitments with an aim of bringing a product mindset into services.
What we are looking for
We’re looking to hire software craftspeople. People who are proud of the way they work and the code they write. People who believe in and are evangelists of extreme programming principles. High quality, motivated and passionate people who make great teams. We heavily believe in being a DevOps organization, where developers own the entire release cycle and thus get to work not only on programming languages but also on infrastructure technologies in the cloud.
What you’ll be doing
You’ll be working on the data architecture, building ETL pipelines, and shaping the cloud migration strategy. You’ll be writing self-explanatory, clean code. Your code will produce the same, predictable results, over and over again. You’ll be making frequent, small releases. You’ll be working in pairs. You’ll be doing peer code reviews.
You will be responsible for all aspects of development – from understanding requirements, writing stories, analyzing the technical approach to writing test cases where possible, development, deployment and fixes. You will own the entire stack and take complete ownership of the solution. And, most importantly, you’ll be making a pledge that you’ll never stop learning!
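The "same, predictable results" point above can be sketched as a pure, deterministic transform step. This is a minimal illustration only; the function and field names (`clean_orders`, `id`, `amount`) are hypothetical, not from any client codebase:

```python
# Sketch of a deterministic, test-friendly ETL transform step.
# All names here are illustrative, not from a real pipeline.
def clean_orders(rows):
    """Normalize raw order records: trim strings, drop rows
    without an id, and sort so output order is reproducible."""
    cleaned = []
    for row in rows:
        order_id = (row.get("id") or "").strip()
        if not order_id:
            continue  # skip unusable records instead of failing mid-run
        cleaned.append({"id": order_id, "amount": float(row.get("amount", 0))})
    # Sorting makes repeated runs byte-for-byte identical.
    return sorted(cleaned, key=lambda r: r["id"])

# Test-first style: the same input always yields the same output.
raw = [{"id": " B2 ", "amount": "5"}, {"id": "", "amount": "9"}, {"id": "A1"}]
assert clean_orders(raw) == clean_orders(raw)
assert [r["id"] for r in clean_orders(raw)] == ["A1", "B2"]
```

Because the function has no hidden state or I/O, it is trivially unit-testable, which is what makes frequent, small releases safe.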
Skills you need in order to succeed in this role
- Most Important: Integrity of character, diligence, and the commitment to do your best
- Must-Have: SQL, Databricks, (Scala / PySpark), Azure Data Factory, Test-Driven Development
- Nice to Have: SSIS, Power BI, Kafka, Data Modeling, Data Warehousing, Azure Functions, Azure App Insights
- Self-Learner: You must be extremely hands-on and obsessive about delivering clean code
- Sense of Ownership: Do whatever it takes to meet development timelines
- Wide range of experience including on-prem and cloud technologies.
- Can analyze a problem and recommend solutions.
- Clear communication to the team and clients
- Experience in creating end-to-end data pipelines
- Experience in Azure Data Factory (ADF) creating multiple pipelines and activities using Azure for full and incremental data loads into Azure Data Lake Store and Azure SQL DW
- Working experience in Databricks
- Strong in BI/DW/Data lake Architecture, design, and ETL
- Strong in Requirement Analysis, Data Analysis, and Data Modeling capabilities
- Experience in object-oriented programming, data structures, algorithms, and software engineering
- Experience working in Agile and Extreme Programming methodologies in a continuous deployment environment.
- Interest in mastering technologies like relational DBMSs, TDD, CI tools like Azure DevOps, complexity analysis, and performance
- Working knowledge of server configuration/deployment
- Experience using source control and bug tracking systems, writing user stories and technical documentation
- Expertise in creating tables, procedures, functions, triggers, indexes, views, and joins, and in optimizing complex queries
- Experience with database versioning, backups, and restores
- Expertise in data security
- Ability to perform database and query performance tuning
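As a hedged illustration of the SQL items above (tables, indexes, views, and query optimization), here is a minimal sketch using SQLite as a stand-in for any RDBMS; the schema and data are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Table, index, and view creation -- the DDL side of the skills list.
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
cur.execute("CREATE VIEW big_orders AS SELECT * FROM orders WHERE amount > 100")
cur.executemany("INSERT INTO orders (customer, amount) VALUES (?, ?)",
                [("acme", 250.0), ("acme", 40.0), ("globex", 500.0)])
# An aggregate with the index available for the WHERE clause.
cur.execute("SELECT SUM(amount) FROM orders WHERE customer = ?", ("acme",))
total = cur.fetchone()[0]  # 290.0
# EXPLAIN QUERY PLAN is SQLite's window into query optimization;
# the plan detail should mention idx_orders_customer for this lookup.
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'acme'"
).fetchall()
print(total, plan[0][-1])
```

Other engines expose the same idea through their own tools (e.g. `EXPLAIN` in MySQL/PostgreSQL, execution plans in SQL Server).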
Who we are
We are Software Craftspeople. We are proud of the way we work and the code we write. We embrace and are evangelists of eXtreme Programming practices. We heavily believe in being a DevOps organization, where developers own the entire release cycle and thus own quality. And most importantly, we never stop learning!
We work with product organizations to help them scale or modernize their legacy technology solutions. We work with startups to help them operationalize their idea efficiently. We work with large established institutions to help them create internal applications to automate manual operations and achieve scale.
We design software, the team, as well as the organizational strategy required to successfully release robust and scalable products. Incubyte strives to find people who are passionate about coding, learning, and growing along with us. We work with a limited number of clients at a time on dedicated, long-term commitments with an aim of bringing a product mindset into services. More on our website: https://incubyte.co
Join our team! We’re always looking for like minded people!
- 5+ years of professional experience in developing data pipelines for large-scale, complex datasets from a variety of data sources.
- Data Engineering expertise with strong experience working with Big data technologies such as Hadoop, Hive, Spark, Scala, Python etc.
- Experience working with Cloud based data technologies such as Azure Data Lake, Azure Data Factory, Azure Databricks highly desirable.
- Knowledge and experience working with database systems such as Cassandra, HBase, Cosmos etc.
- Moderate coding skills. SQL or similar required. C# or other languages strongly preferred.
- Proven track record of designing and delivering large-scale, high quality systems and software products.
- Outstanding communication and collaboration skills. You can learn from and teach others.
- Strong drive for results. You have a proven record of shepherding experiments to create successful shipping products/services.
- Experience with prediction in adversarial (energy) environments highly desirable.
- A Bachelor's or Master's degree in Computer Science or Engineering, with coursework in Statistics, Data Science, Experimentation Design, and Machine Learning highly desirable.
Education:
- Engineering graduate or higher from Tier I or Tier II colleges
Job Title: Data Engineer
Tech Job Family: DACI
- Bachelor's Degree in Engineering, Computer Science, CIS, or related field (or equivalent work experience in a related field)
- 2 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering
- 1 year of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC)
- Master's Degree in Computer Science, CIS, or related field
- 2 years of IT experience developing and implementing business systems within an organization
- 4 years of experience working with defect or incident tracking software
- 4 years of experience with technical documentation in a software development environment
- 2 years of experience working with an IT Infrastructure Library (ITIL) framework
- 2 years of experience leading teams, with or without direct reports
- Experience with application and integration middleware
- Experience with database technologies
- 2 years of experience in Hadoop or any Cloud Big Data components (specific to the Data Engineering role)
- Expertise in Java/Scala/Python, SQL, Scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka or equivalent Cloud Big Data components (specific to the Data Engineering role)
- Expertise in MicroStrategy/Power BI/SQL, Scripting, Teradata or equivalent RDBMS, Hadoop (OLAP on Hadoop), Dashboard development, Mobile development (specific to the BI Engineering role)
- 2 years of experience in Hadoop, NoSQL, RDBMS or any Cloud Big Data components, Teradata, MicroStrategy (specific to the Platform Engineering role)
- Expertise in Python, SQL, Scripting, Teradata, Hadoop utilities like Sqoop, Hive, Pig, MapReduce, Spark, Ambari, Ranger, Kafka or equivalent Cloud Big Data components (specific to the Platform Engineering role)
Lowe’s is an equal opportunity employer and administers all personnel practices without regard to race, color, religion, sex, age, national origin, disability, sexual orientation, gender identity or expression, marital status, veteran status, genetics or any other category protected under applicable law.
- Proven work experience as an Enterprise / Data / Analytics Architect - Data Platform in HANA XSA, XS, Data Intelligence and SDI
- Can work on new and existing architecture decision in HANA XSA, XS, Data Intelligence and SDI
- Well versed with data architecture principles, software / web application design, API design, UI / UX capabilities, XSA / Cloud foundry architecture
- In-depth understanding of database structure (HANA in-memory) principles.
- In-depth understanding of ETL solutions and data integration strategy.
- Excellent knowledge of Software and Application design, API, XSA, and microservices concepts
Roles & Responsibilities:
- Advise on and ensure compliance with the defined Data Architecture principles.
- Identify new technology updates and development tools, including new releases/upgrades/patches as required.
- Analyze technical risks and advise on risk mitigation strategy.
- Advise on and ensure compliance with existing and newly developed data and reporting standards, including naming conventions.
The time window is ideally AEST (8 am till 5 pm) which means starting at 3:30 am IST. We understand it can be very early for an SME supporting from India. Hence, we can consider the candidates who can support from at least 7 am IST (earlier is possible).
Working with a multitude of clients populating the FTSE and Fortune 500s, Audit Partnership is a people focused organization with a strong belief in our employees. We hire the best people to provide the best services to our clients.
APL offers profit recovery services to organizations of all sizes across a number of sectors. APL was born out of a desire to offer an alternative to the stagnant service provision on offer in the profit recovery industry.
Every year we recover millions of pounds for our clients and work closely with them, sharing our audit findings to minimize future losses. Our dedicated and highly experienced audit teams utilize progressive, dynamic financial service solutions and industry-leading technology to achieve maximum success.
We provide dynamic work environments focused on delivering data-driven solutions at a rapidly increased pace over traditional development. Be a part of our passionate and motivated team who are excited to use the latest in software technologies within financial services.
Headquartered in the UK, we have expanded from a small team in 2002 to a market leading organization serving clients across the globe while keeping our clients at the heart of all decisions we make.
We are looking for a high-potential, enthusiastic SQL Data Engineer with a strong desire to build a career in data analysis, database design and application solutions. Reporting directly to our UK based Technology team, you will provide support to our global operation in the delivery of data analysis, conversion, and application development to our core audit functions.
Duties will include assisting with data loads, using T-SQL to analyse data, front-end code changes, data housekeeping, data administration, and supporting the Data Services team as a whole. Your contribution will grow in line with your experience and skills, becoming increasingly involved in the core service functions and client delivery. You will be a self-starter with a deep commitment to the highest standards of quality and customer service. We are offering a fantastic career opportunity, working for a leading international financial services organisation, serving the world’s largest organisations.
What we are looking for:
- 1-2 years of previous experience in a similar role
- Data analysis and conversion skills using Microsoft SQL Server are essential
- An understanding of relational database design and build
- Schema design, normalising data, indexing, query performance analysis
- Ability to analyse complex data to identify patterns and detect anomalies
- Assisting with ETL design and implementation projects
- Knowledge or experience in one or more of the key technologies below would be preferable:
- Microsoft SQL Server (SQL Server Management Studio, Stored Procedure writing etc)
- Programming languages (C#, VB, Python etc)
- Use of Python to manipulate and import data
- Experience of ETL/automation advantageous but not essential (SSIS/Prefect/Azure)
- A self-starter who can drive projects with minimal guidance
- Meeting stakeholders to agree system requirements
- Someone who is enthusiastic and eager to learn
- Very good command of English and excellent communication skills
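A hedged sketch of the anomaly-detection work described above: duplicate-payment checks are a common profit-recovery pattern, and a GROUP BY / HAVING query is the usual shape. SQLite stands in for SQL Server here, and the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (supplier TEXT, invoice_no TEXT, amount REAL)")
conn.executemany("INSERT INTO payments VALUES (?, ?, ?)", [
    ("acme", "INV-1", 100.0),
    ("acme", "INV-1", 100.0),   # same invoice paid twice -> recoverable
    ("globex", "INV-7", 55.0),
])
# Flag supplier/invoice pairs paid more than once.
dupes = conn.execute("""
    SELECT supplier, invoice_no, COUNT(*) AS times_paid, SUM(amount) AS total
    FROM payments
    GROUP BY supplier, invoice_no
    HAVING COUNT(*) > 1
""").fetchall()
print(dupes)  # [('acme', 'INV-1', 2, 200.0)]
```

The same query translates almost verbatim to T-SQL in SQL Server Management Studio.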
Perks & Benefits:
- A fantastic work life balance
- Competitive compensation and benefits
- Exposure to working with Fortune 500 organizations
- Expert guidance and nurture from global leaders
- Opportunities for career and personal advancement with our continued global growth strategy
- Industry leading training programs
- A working environment that is exciting, fun and engaging
- Hands-on experience in any Cloud Platform
- Microsoft Azure Experience
Develop complex queries, pipelines and software programs to solve analytics and data mining problems
Interact with other data scientists, product managers, and engineers to understand business problems, technical requirements to deliver predictive and smart data solutions
Prototype new applications or data systems
Lead data investigations to troubleshoot data issues that arise along the data pipelines
Collaborate with different product owners to incorporate data science solutions
Maintain and improve data science platform
BS/MS/PhD in Computer Science, Electrical Engineering or related disciplines
Strong fundamentals: data structures, algorithms, database
5+ years of software industry experience with 2+ years in analytics, data mining, and/or data warehouse
Fluency with Python
Experience developing web services using REST approaches.
Proficiency with SQL/Unix/Shell
Experience in DevOps (CI/CD, Docker, Kubernetes)
Self-driven, challenge-loving, detail oriented, teamwork spirit, excellent communication skills, ability to multi-task and manage expectations
Industry experience with big data processing technologies such as Spark and Kafka
Experience with machine learning algorithms and/or R a plus
Experience in Java/Scala a plus
- Experience with any MPP analytics engine like Vertica
Experience with data integration tools like Pentaho/SAP Analytics Cloud
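The "web services using REST approaches" requirement above can be sketched with only the standard library. This is a minimal illustration; the endpoint path and payload are invented, and a real service would use a framework such as Flask or FastAPI:

```python
# Minimal REST-style JSON endpoint, standard library only.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            # Payload is a made-up pipeline health summary.
            body = json.dumps({"rows_processed": 1000, "status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the example quiet
        pass

# Port 0 asks the OS for a free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(f"serving on port {server.server_address[1]}")
```

GET `/metrics` then returns the JSON document; any other path gets a 404, mirroring basic resource-oriented routing.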
- Strong Python Coding skills and OOP skills
- Should have worked on Big Data product Architecture
- Should have worked with at least one SQL-based database (e.g., MySQL, PostgreSQL) and at least one NoSQL-based database (e.g., Cassandra, Elasticsearch)
- Hands on experience on frameworks like Spark RDD, DataFrame, Dataset
- Experience on development of ETL for data product
- Candidate should have working knowledge of performance optimization, optimal resource utilization, parallelism, and tuning of Spark jobs
- Working knowledge of file formats: CSV, JSON, XML, Parquet, ORC, Avro
- Good to have working knowledge with any one of the Analytical Databases like Druid, MongoDB, Apache Hive etc.
- Experience handling real-time data feeds (working knowledge of Apache Kafka or a similar tool is good to have)
- Python and Scala (Optional), Spark / PySpark, Parallel programming
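As a small illustration of the file-format point above, here is a CSV and JSON round-trip using only the standard library (Parquet, ORC, and Avro need third-party libraries such as pyarrow, so they are left out of this sketch; the records are invented):

```python
import csv
import io
import json

records = [{"id": "1", "name": "alice"}, {"id": "2", "name": "bob"}]

# CSV: flat, schemaless text -- every value round-trips as a string.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name"])
writer.writeheader()
writer.writerows(records)
csv_back = list(csv.DictReader(io.StringIO(buf.getvalue())))

# JSON: preserves nesting and basic types, common for API payloads.
json_back = json.loads(json.dumps(records))

assert csv_back == records
assert json_back == records
```

Columnar formats like Parquet and ORC add typed schemas and compression on top of this, which is why they dominate in Spark pipelines.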