TechWeirdo delivers AI models and enterprise solutions globally for mid- to large-scale organizations.
We offer consultation, services, and products to holistically address the digital transformation goals of an enterprise.
We are currently hiring passionate senior IoT cloud engineers on behalf of one of our large customers, helping them find best-fit talent to create technologically challenging, visually delightful, and easy-to-use digital products in a fast-paced environment.
Skills / Role Requirements:
- Good hands-on experience with Azure IoT Gateway / IoT Hub development
- Good hands-on experience with Azure Functions, Azure Event Hubs, Azure IoT Edge, and cloud platform security
- Strong knowledge of C# .NET
- Industrial IoT experience is a must
- Device Communication knowledge with exposure to different protocols is an advantage
- Good communication skills, as this position requires consistent interaction with business stakeholders and other engineers
- Hands-on experience in optimization, architecture, and building scalable real-time data pipelines is a plus
- 5+ years of relevant experience
- Surrounded by curious learners: With a growth mindset as our core strength, we have created a learning environment full of curious tech learners.
- New challenges every day: There is no ordinary day at TechWeirdo; if you like solving problems, this is the right place for you.
- Zero micro-management, limited supervision: We encourage our team to take on challenging tasks and solve complex problems by taking ownership of their work. We trust our team to take calculated risks.
- Great networking: You will be connected with C-suite executives of top organizations while working with our winning team.
- Building technology how you want, when you want: We welcome people who see things differently, as they are the ones who have the ability to change the world.
Design and implement robust database solutions, including:
- Security, backup, and recovery
- Performance, scalability, monitoring, and tuning
- Data management and capacity planning
- Planning and implementing failover between database instances
Create data architecture strategies for each subject area of the enterprise data model.
Communicate plans, status and issues to higher management levels.
Collaborate with the business, architects, and other IT organizations to plan a data strategy, sharing important information related to database concerns and constraints.
Produce all project data architecture deliverables.
Create and maintain a corporate repository of all data architecture artifacts.
Understanding of data analysis, business principles, and operations
Software architecture and design
Network design and implementation
Data visualization, data migration and data modelling
Relational database management systems
DBMS software, including SQL Server
Database and cloud computing design, architectures and data lakes
Information management and data processing on multiple platforms
Agile methodologies and enterprise resource planning implementation
Demonstrated expertise in database technical functionality, such as performance tuning, backup and recovery, and monitoring.
Excellent skills with advanced features such as database encryption, replication, partitioning, etc.
Strong problem-solving, organizational, and communication skills.
- Database design, Data Warehouse, Data Mart
- Cloud Platforms - Microsoft Azure (Data Factory, Databricks, Data Pipelines, etc.)
- Data manipulation/Data Engineering/ Data Profiling
- Python and SQL/PostgreSQL/SQL Server
- Apache Hadoop/HDFS, Apache Spark, Apache Sqoop
- REST APIs and SOAP APIs
- Experience across technology paradigms such as Big Data analytics and ETL, developing Data Warehouses and Data Marts on-premise as well as on cloud platforms.
- Developed a Big Data framework to process large volumes of data through cluster computing using Hadoop, Apache Spark, and PostgreSQL.
- Performed data cleaning on unstructured information using various HDFS and Apache Spark tools.
- Hands-on experience with Azure Cloud: Data Factory, Databricks, Data Lake, Data Pipelines.
- Hands-on experience using REST APIs and SOAP APIs.
- Hands-on knowledge of various data science algorithms and libraries in R and Python.
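The data manipulation and profiling bullets above can be sketched with a minimal, dependency-free example; the record fields (`email`, `age`) and the cleaning rule are purely illustrative assumptions, not part of the role description:

```python
# Minimal data-profiling/cleaning sketch: drop rows with missing required
# fields and report how many nulls each column had. Field names are hypothetical.
from collections import Counter

def profile_and_clean(records):
    """Return (cleaned rows, per-column null counts)."""
    null_counts = Counter()
    cleaned = []
    for row in records:
        missing = [k for k, v in row.items() if v in (None, "")]
        null_counts.update(missing)
        if not missing:
            cleaned.append(row)
    return cleaned, dict(null_counts)

raw = [
    {"email": "a@example.com", "age": 34},
    {"email": "", "age": 29},
    {"email": "b@example.com", "age": None},
]
clean, nulls = profile_and_clean(raw)
print(len(clean), nulls)  # 1 clean row; nulls: {'email': 1, 'age': 1}
```

In a production pipeline the same shape of logic would typically run inside Spark or a SQL engine rather than plain Python, but the profiling step (count nulls, then filter) is the same.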
• Project Planning and Management
o Take end-to-end ownership of multiple projects / project tracks
o Create and maintain project plans and other related documentation for project
objectives, scope, schedule and delivery milestones
o Lead and participate across all the phases of software engineering, right from
requirements gathering to GO LIVE
o Lead internal team meetings on solution architecture, effort estimation, manpower
planning and resource (software/hardware/licensing) planning
o Manage RIDA (Risks, Impediments, Dependencies, Assumptions) for projects by
developing effective mitigation plans
• Team Management
o Act as the Scrum Master
o Conduct Scrum ceremonies like Sprint Planning, Daily Standup, and Sprint Retrospective
o Set clear objectives for the project and roles/responsibilities for each team member
o Train and mentor the team on their job responsibilities and Scrum principles
o Make the team accountable for their tasks and help the team in achieving them
o Identify skill gaps and create a skill-development plan for all team members
o Be the Single Point of Contact for the client in terms of day-to-day communication
o Periodically communicate project status to all the stakeholders (internal/external)
• Process Management and Improvement
o Create and document processes across all disciplines of software engineering
o Identify gaps and continuously improve processes within the team
o Encourage team members to contribute towards process improvement
o Develop a culture of quality and efficiency within the team
• Minimum 8 years of experience (hands-on as well as leadership) in software / data engineering
across multiple job functions like Business Analysis, Development, Solutioning, QA, and DevOps
• Hands-on as well as leadership experience in Big Data Engineering projects
• Experience developing or managing cloud solutions using Azure or other cloud provider
• Demonstrable knowledge of Hadoop, Hive, Spark, NoSQL DBs, SQL, Data Warehousing, and ETL/ELT
• Strong project management and communication skills
• Strong analytical and problem-solving skills
• Strong systems level critical thinking skills
• Strong collaboration and influencing skills
Good to have:
• Knowledge of PySpark, Azure Data Factory, Azure Data Lake Storage, Synapse Dedicated SQL
Pool, Databricks, PowerBI, Machine Learning, Cloud Infrastructure
• Background in BFSI with focus on core banking
• Willingness to travel
• Customer Office (Mumbai) / Remote Work
• UG: B.Tech (Computers) / B.E. (Computers) / BCA / B.Sc. Computer Science
Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.
You’ll spend time on the following:
- You will partner with teammates to create complex data processing pipelines that solve our clients' most ambitious challenges
- You will collaborate with Data Scientists to design scalable implementations of their models
- You will pair-program to write clean, iterative code based on TDD
- Leverage various continuous delivery practices to deploy data pipelines
- Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
- Develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
- Create data models and speak to the tradeoffs of different modeling approaches
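The TDD practice mentioned above can be illustrated with a tiny sketch: state the behaviour you want from a pipeline step as an assertion first, then write just enough code to make it pass. The function name and sample records are hypothetical:

```python
# TDD-style sketch for a small pipeline step: deduplicate rows by a key
# while preserving first-seen order. All names here are illustrative.

def dedupe_by_key(rows, key):
    """Keep the first occurrence of each key value, preserving input order."""
    seen = set()
    out = []
    for row in rows:
        if row[key] not in seen:
            seen.add(row[key])
            out.append(row)
    return out

# The "test first" step: the assertion below was written before the function body.
events = [{"id": 1, "v": "a"}, {"id": 1, "v": "b"}, {"id": 2, "v": "c"}]
assert dedupe_by_key(events, "id") == [{"id": 1, "v": "a"}, {"id": 2, "v": "c"}]
```

In practice each such assertion would live in a test framework (e.g. pytest) and run in CI as part of the continuous delivery practices the role describes.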
Here’s what we’re looking for:
- You have a good understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop
- You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting
- Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)
- You are comfortable taking data-driven approaches and applying data security strategy to solve business problems
- Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems
- Strong communication and client-facing skills with the ability to work in a consulting environment
Pingahla is recruiting Business Intelligence Consultants / Senior Consultants who can help us with Information Management projects (domestic, onshore, and offshore) as developers and team leads. Candidates are expected to have 3-6 years of experience with Informatica PowerCenter / Talend DI / Informatica Cloud and must be very proficient in Business Intelligence in general. The job is based out of our Pune office.
- Manage the customer relationship by serving as the single point of contact before, during and after engagements.
- Architect data management solutions.
- Provide technical leadership to other consultants and/or customer/partner resources.
- Design, develop, test and deploy data integration solutions in accordance with customer’s schedule.
- Supervise and mentor all intermediate and junior level team members.
- Provide regular reports to communicate status both internally and externally.
- A typical profile that would suit this position has the following background:
- A graduate from a reputed engineering college
- Excellent IQ and analytical skills, with the ability to grasp new concepts and learn new technologies
- A willingness to work with a small team in a fast-growing environment.
- A good knowledge of Business Intelligence concepts
- Good knowledge of at least one of the following data integration tools: Informatica PowerCenter, Talend DI, Informatica Cloud
- Knowledge of SQL
- Excellent English and communication skills
- Intelligent, quick to learn new technologies
- Track record of accomplishment and effectiveness with handling customers and managing complex data management needs
Preferred Education & Experience:
Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience may be considered in lieu of the above for candidates from a different stream of education.
Well-versed in and 5+ years of hands-on demonstrable experience with:
▪ Data Analysis & Data Modeling
▪ Database Design & Implementation
▪ Database Performance Tuning & Optimization
▪ PL/pgSQL & SQL
5+ years of hands-on development experience in Relational Database (PostgreSQL/SQL Server/Oracle).
5+ years of hands-on development experience in SQL, PL/PgSQL, including stored procedures, functions, triggers, and views.
Demonstrable hands-on working experience with database design principles, SQL query optimization techniques, index management, integrity checks, statistics, and isolation levels.
Demonstrable hands-on working experience with database read and write performance tuning and optimization.
Knowledge of and working experience with Domain-Driven Design (DDD) concepts, Object-Oriented Programming (OOP) concepts, cloud architecture concepts, and NoSQL database concepts are added advantages.
Knowledge of and working experience in the Oil & Gas, Financial, and Automotive domains is a plus.
Hands-on development experience in one or more NoSQL data stores such as Cassandra, HBase, MongoDB, DynamoDB, Elasticsearch, Neo4j, etc. is a plus.
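The index-management and query-optimization skills listed above can be sketched with the standard library's sqlite3 module; the `orders` schema and data are hypothetical, and the exact query-plan wording varies by SQLite version:

```python
# Illustrative sketch: how adding an index changes the query plan.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

# Without an index the planner scans the whole table...
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall()

# ...after adding one, the same query becomes an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall()

print(plan_before[0][-1])  # e.g. "SCAN orders"
print(plan_after[0][-1])   # e.g. "SEARCH orders USING INDEX idx_orders_customer (customer_id=?)"
```

The same before/after discipline (read the plan, add or adjust an index, re-read the plan) carries over directly to PostgreSQL's `EXPLAIN` and SQL Server's execution plans.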
About the role
- Collaborating with a team of like-minded and experienced engineers for Tier 1 customers, you will focus on data engineering on large complex data projects. Your work will have an impact on platforms that handle crores of customers and millions of transactions daily.
- As an engineer, you will use the latest cloud services to design and develop reusable core components and frameworks to modernise data integrations in a cloud-first world, and own those integrations end to end, working closely with business units. You will design and build for efficiency, reliability, security, and scalability. As a consultant, you will help drive a data engineering culture and advocate best practices.
- 1-6 years of relevant experience
- Strong SQL skills and data literacy
- Hands-on experience designing and developing data integrations, either in ETL tools, cloud native tools or in custom software
- Proficiency in scripting and automation (e.g. PowerShell, Bash, Python)
- Experience in an enterprise data environment
- Strong communication skills
- Ability to work on data architecture, data models, data migration, integration and pipelines
- Ability to work on data platform modernisation from on-premise to cloud-native
- Proficiency in data security best practices
- Stakeholder management experience
- Positive attitude with the flexibility and ability to adapt to an ever-changing technology landscape
- Desire to gain breadth and depth of technologies to support customer's vision and project objectives
What to expect if you join Servian?
- Learning & Development: We invest heavily in our consultants, offer weekly internal training (technical and non-technical alike!), and abide by a "You Pass, We Pay" policy.
- Career progression: We take a longer-term view of every hire. We have a flat org structure and promote from within. Every hire is developed as a future leader and client adviser.
- Variety of projects: As a consultant, you will have the opportunity to work on multiple projects across our client base, significantly increasing your skills and exposure in the industry.
- Great culture: Working on the latest Apple MacBook Pro in our custom-designed offices in the heart of leafy Jayanagar, we provide a peaceful and productive work environment close to shops, parks, and the metro station.
- Professional development: We invest heavily in professional development both technically, through training and guided certification pathways, and in consulting, through workshops in client engagement and communication. Growth in our organisation happens from the growth of our people.
We are looking for an outstanding ML Architect (Deployments) with expertise in deploying Machine Learning solutions/models into production and scaling them to serve millions of customers. The ideal candidate has an adaptable and productive working style that fits a fast-moving environment.
- 5+ years deploying Machine Learning pipelines in large enterprise production systems.
- Experience developing end-to-end ML solutions, from business hypothesis to deployment, with an understanding of the entire ML development life cycle.
- Expert in modern software development practices; solid experience with source control management and CI/CD.
- Proficient in designing relevant architecture / microservices to fulfil application integration, model monitoring, training/re-training, model management, model deployment, model experimentation/development, and alert mechanisms.
- Experience with public cloud platforms (Azure, AWS, GCP).
- Serverless services like AWS Lambda, Azure Functions, and/or Google Cloud Functions.
- Orchestration services like Azure Data Factory, AWS Data Pipeline, and/or Google Cloud Dataflow.
- Data science workbench / managed services like Azure Machine Learning, Amazon SageMaker, and/or Google AI Platform.
- Data warehouse services like Snowflake, AWS Redshift, Google BigQuery, and Azure SQL DW.
- Distributed computing services like PySpark, EMR, and Databricks.
- Data storage services like Google Cloud Storage, Amazon S3, Azure Blob Storage, and S3 Glacier.
- Data visualization tools like Power BI, Tableau, QuickSight, and/or Qlik.
- Proven experience serving up predictive algorithms and analytics through batch and real-time APIs.
- Solid working experience with software engineers, data scientists, product owners, business analysts, project managers, and business stakeholders to design the holistic solution.
- Strong technical acumen around automated testing.
- Extensive background in statistical analysis and modeling (distributions, hypothesis testing, probability theory, etc.)
- Strong hands-on experience with statistical packages and ML libraries (e.g., Python scikit-learn, Spark MLlib, etc.)
- Experience in effective data exploration and visualization (e.g., Excel, Power BI, Tableau, Qlik, etc.)
- Experience in developing and debugging in one or more of the languages Java, Python.
- Ability to work in cross functional teams.
- Apply Machine Learning techniques in production including, but not limited to, neural nets, regression, decision trees, random forests, ensembles, SVMs, Bayesian models, K-Means, etc.
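As a toy illustration of one technique in the list above, here is a dependency-free K-Means sketch on one-dimensional data; production work would use a library such as scikit-learn or Spark MLlib, and the sample data and starting centres are made up:

```python
# Minimal Lloyd's-algorithm K-Means on 1-D data, for illustration only.

def kmeans_1d(points, centers, iters=10):
    """Iteratively assign points to the nearest centre, then recentre on the mean."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # Assign each point to its nearest centre.
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each centre to the mean of its cluster (keep it if the cluster is empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
centers = kmeans_1d(data, centers=[0.0, 10.0])
print(sorted(centers))  # two centres, near 1.0 and 9.0
```

The same assign/recentre loop generalises to higher dimensions by replacing the absolute difference with a Euclidean distance and the scalar mean with a per-coordinate mean.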
Roles and Responsibilities:
Deploying ML models into production, and scaling them to serve millions of customers.
Technical solutioning skills with a deep understanding of technical API integrations, AI / Data Science, Big Data, and public cloud architectures / deployments in a SaaS environment.
Strong stakeholder relationship management skills - able to influence and manage the expectations of senior executives.
Strong networking skills with the ability to build and maintain strong relationships with business, operations, and technology teams, both internally and externally.
Provide software design and programming support to projects.
Qualifications & Experience:
Engineering graduates and postgraduates, preferably in Computer Science from premier institutions, with 5-7 years of proven work experience as a Machine Learning Architect (Deployments) or in a similar role.
DP-100/DP-200/AZ-900 certification mandatory.
- 3+ years of hands-on experience with the Azure Databricks platform: Delta Lake, time-travel design patterns, Lambda data architecture
- Experience with the Azure Cloud Platform (ADLS, Blob, ADF)
- Understanding of prevalent big data file formats such as JSON, Avro, and Parquet
- Experience with Python, PySpark, Scala, Spark to write data pipelines and data processing layers
- Demonstrate expertise in writing complex, highly-optimized SQL queries across large data sets
- Experience with Data Governance (Data Quality, Metadata Management, Security, etc.)
- Strong SQL skills and proven experience in working with large datasets and relational databases.
- Understanding of Data Warehouse design principles (Tabular, Multidimensional)
- Experience in working with big data tools/environment to automate production pipelines
- Familiarity with Azure Synapse and Snowflake
- Experience with visual data analysis and BI tools (matplotlib and Power BI)
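Several bullets above call for complex, highly-optimized SQL over large data sets. As a small runnable sketch of what that means in practice, here is a window-function query run through the standard library's sqlite3 (the `sales` schema and data are hypothetical; window functions need SQLite 3.25+, which modern Pythons ship):

```python
# Window-function sketch: rank each sale within its region by amount.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100), ("east", 300), ("west", 200), ("west", 50)])

rows = conn.execute("""
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()
print(rows)  # [('east', 300.0, 1), ('east', 100.0, 2), ('west', 200.0, 1), ('west', 50.0, 2)]
```

The same `PARTITION BY ... ORDER BY` pattern appears unchanged in Spark SQL, Synapse, and Snowflake, which is why window functions are a useful benchmark of SQL depth for roles like this one.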