
Codalyze Technologies
https://codalyze.com
Jobs at Codalyze Technologies
The recruiter has not been active on this job recently. You may apply but please expect a delayed response.
Your mission is to help lead the team toward creating solutions that improve the way our business is run. Your knowledge of design, development, coding, testing, and application programming will help your team raise their game, meet your standards, and satisfy both business and functional requirements. Your expertise across technology domains will be counted on to set strategic direction and solve complex, mission-critical problems, both internally and externally. Your drive to embrace leading-edge technologies and methodologies will inspire your team to follow suit.
Responsibilities and Duties :
- As a Data Engineer, you will be responsible for developing data pipelines for numerous applications, handling all kinds of data: structured, semi-structured, and unstructured. Big data knowledge, especially in Spark and Hive, is highly preferred.
- Work in a team and provide proactive technical oversight; advise development teams to foster reuse, design for scale, stability, and operational efficiency of data/analytical solutions.
Education level :
- Bachelor's degree in Computer Science or equivalent
Experience :
- Minimum of 3 years of relevant experience working on production-grade projects, with hands-on, end-to-end software development experience
- Expertise in application, data and infrastructure architecture disciplines
- Expertise in designing data integrations using ETL and other data integration patterns
- Advanced knowledge of architecture, design and business processes
Proficiency in :
- Modern programming languages like Java, Python, Scala
- Big Data technologies: Hadoop, Spark, Hive, Kafka
- Writing well-optimized SQL queries
- Orchestration and deployment tools like Airflow & Jenkins for CI/CD (Optional)
- Design and development of integration solutions with Hadoop/HDFS, real-time systems, data warehouses, and analytics solutions
- Knowledge of system development lifecycle methodologies, such as waterfall and Agile.
- An understanding of data architecture and modeling practices and concepts, including entity-relationship diagrams, normalization, abstraction, denormalization, dimensional modeling, and metadata modeling practices.
- Experience generating physical data models and the associated DDL from logical data models.
- Experience developing data models for operational, transactional, and reporting systems, including the development of or interfacing with data analysis, data mapping, and data rationalization artifacts.
- Experience enforcing data modeling standards and procedures.
- Knowledge of web technologies, application programming languages, OLTP/OLAP technologies, data strategy disciplines, relational databases, data warehouse development and Big Data solutions.
- Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals
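As an illustration of the SQL optimization skill listed above, the sketch below (hypothetical table and column names, using SQLite purely for demonstration) shows the kind of reasoning involved: checking a query plan and adding an index so the engine can seek instead of scanning:

```python
import sqlite3

# In-memory database with a hypothetical events table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, payload TEXT)"
)
conn.executemany(
    "INSERT INTO events (user_id, payload) VALUES (?, ?)",
    [(i % 100, f"event-{i}") for i in range(10_000)],
)

# Without an index, filtering on user_id scans the whole table.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42"
).fetchall()

# An index on the filter column lets the engine seek instead of scan.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42"
).fetchall()

rows = conn.execute(
    "SELECT COUNT(*) FROM events WHERE user_id = 42"
).fetchone()
print(rows[0])  # 100 rows match (user_id cycles 0-99 over 10,000 inserts)
```

The same habit transfers to Hive or Spark SQL, where partitioning and predicate pushdown play the role the index plays here.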
Skills :
Must Know :
- Core big-data concepts
- Spark - PySpark/Scala
- At least one data integration tool, such as Pentaho, NiFi, or SSIS
- Handling of various file formats
- Cloud platform - AWS/Azure/GCP
- Orchestration tool - Airflow
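A minimal sketch of the "handling of various file formats" point, using only the standard library and illustrative field names (production pipelines would typically use Spark or a similar engine, and formats like Parquet or Avro): the idea is to normalize every source format to one in-memory shape so downstream transforms are format-agnostic.

```python
import csv
import io
import json

# The same records serialized two ways; field names are illustrative.
csv_text = "user_id,amount\n1,9.99\n2,15.00\n"
json_text = '[{"user_id": 1, "amount": 9.99}, {"user_id": 2, "amount": 15.0}]'

def load_csv(text: str) -> list[dict]:
    # csv.DictReader yields strings, so cast to the expected types.
    return [
        {"user_id": int(r["user_id"]), "amount": float(r["amount"])}
        for r in csv.DictReader(io.StringIO(text))
    ]

def load_json(text: str) -> list[dict]:
    return json.loads(text)

# Both loaders produce the same normalized records, so downstream
# transforms do not need to know the source format.
assert load_csv(csv_text) == load_json(json_text)
```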