Bidgely is seeking an outstanding and deeply technical Principal Engineer / Sr. Principal Engineer / Architect to lead the architecture and evolution of our next-generation data and platform infrastructure. This is a senior IC role for someone who loves solving complex problems at scale, thrives in high-ownership environments, and influences engineering direction across teams.
You will be instrumental in designing scalable and resilient platform components that can handle trillions of data points, integrate machine learning pipelines, and support advanced energy analytics. As we evolve our systems for the future of clean energy, you will play a critical role in shaping the platform that powers all Bidgely products.
Responsibilities
- Architect & Design: Lead the end-to-end architecture of core platform components – from ingestion pipelines to ML orchestration and serving layers. Architect for scale (200Bn+ daily data points), performance, and flexibility.
- Technical Leadership: Act as a thought leader and trusted advisor for engineering teams. Review designs, guide critical decisions, and set high standards for software engineering excellence.
- Platform Evolution: Define and evolve the platform’s vision, making key choices in data processing, storage, orchestration, and cloud-native patterns.
- Mentorship: Coach senior engineers and staff on architecture, engineering best practices, and system thinking. Foster a culture of engineering excellence and continuous improvement.
- Innovation & Research: Evaluate and experiment with emerging technologies (e.g., event-driven architectures, AI infrastructure, new cloud-native tools) to stay ahead of the curve.
- Cross-functional Collaboration: Partner with Engineering Managers, Product Managers, and Data Scientists to align platform capabilities with product needs.
- Non-functional Leadership: Ensure systems are secure, observable, resilient, performant, and cost-efficient. Drive excellence in areas like compliance, DevSecOps, and cloud cost optimization.
- GenAI Integration: Explore and drive adoption of Generative AI to enhance developer productivity, platform intelligence, and automation of repetitive engineering tasks.
Requirements:
- 8+ years in backend/platform architecture roles, ideally at significant scale.
- Deep expertise in distributed systems, data engineering stacks (Kafka, Spark, HDFS, NoSQL DBs like Cassandra/ElasticSearch), and cloud-native infrastructure (AWS, GCP, or Azure).
- Proven ability to architect high-throughput, low-latency systems with batch + real-time processing.
- Experience designing and implementing DAG-based data processing and orchestration systems.
- Proficient in Java (Spring Boot, REST), and comfortable with infrastructure-as-code and CI/CD practices.
- Strong understanding of non-functional areas: security, scalability, observability, and compliance.
- Exceptional problem-solving skills and a data-driven approach to decision-making.
- Excellent communication and collaboration skills with the ability to influence at all levels.
- Prior experience working in a SaaS environment is a strong plus.
- Experience with GenAI tools or frameworks (e.g., LLMs, embedding models, prompt engineering, RAG, Copilot-like integrations) to accelerate engineering workflows or enhance platform intelligence is highly desirable.
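The "DAG-based data processing and orchestration" requirement above can be illustrated with a minimal sketch. This is a toy executor using Kahn's topological sort, not any specific orchestrator's API; task names and the two-argument `run_dag` signature are hypothetical choices for the example:

```python
from collections import defaultdict, deque

def run_dag(tasks, edges):
    """Execute tasks in dependency order using Kahn's topological sort.

    tasks: dict of task name -> zero-arg callable
    edges: list of (upstream, downstream) name pairs
    Returns task names in the order they actually ran.
    """
    indegree = {name: 0 for name in tasks}
    downstream = defaultdict(list)
    for up, down in edges:
        downstream[up].append(down)
        indegree[down] += 1

    # Start with tasks that have no unmet dependencies (sorted for determinism).
    ready = deque(sorted(n for n, d in indegree.items() if d == 0))
    order = []
    while ready:
        name = ready.popleft()
        tasks[name]()  # run the task body
        order.append(name)
        for down in downstream[name]:
            indegree[down] -= 1
            if indegree[down] == 0:
                ready.append(down)

    if len(order) != len(tasks):
        raise ValueError("cycle detected in DAG")
    return order
```

For example, wiring an extract → transform → load pipeline as `run_dag(tasks, [("extract", "transform"), ("transform", "load")])` runs the three stages in dependency order; real orchestrators add retries, scheduling, and distributed workers on top of this same core idea.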

About Bidgely
Bidgely is an energy-intelligence company founded in 2011, specialising in turning smart-meter and utility data into actionable insights via its UtilityAI™ platform. Its goal: help utilities (electric, gas, water) personalise customer engagement, modernise grids, and enable EV charging and distributed energy resources, all while lowering operating cost and improving efficiency. Bidgely operates globally (North America, Europe, Asia Pacific) and is in a growth/scale-up phase.
Bidgely holds 17+ patents and has raised over $50M in funding. Our team of 30+ data scientists and 150+ engineers brings a shared passion for using AI to revolutionize how the world consumes energy.
As energy usage evolves, our platform is scaling fast — growing from 700 billion to trillions of data points processed. Headquartered in Silicon Valley with hubs in India and Europe, Bidgely blends the agility of a scale-up with the impact of a global clean-energy leader.
Our India Development Center is the heart of innovation, driving data science, technology, and product delivery for customers worldwide.
Bidgely provides an AI-powered SaaS platform (UtilityAI™) that takes meter data (AMI or non-AMI) and disaggregates usage to the appliance level for each household, enabling utilities to personalise engagement and optimise grid operations.
JOB DESCRIPTION
Experience: 5-8 years
Location: Mumbai
Wissen Technology is now hiring a Senior Java Developer - Bangalore with hands-on experience in Core Java, algorithms, data structures, multithreading, and SQL. We are solving complex technical problems in the industry and need talented software engineers to join our mission and be part of a global software development team. This is a brilliant opportunity to join a highly motivated, expert team that has made its mark as a high-end technical consulting firm.
Required Skills:
- Experience: 5-8 years
- Experience in Core Java and Spring Boot.
- Extensive experience in developing enterprise-scale applications and systems. Should possess good architectural knowledge and be aware of enterprise application design patterns.
- Should have the ability to analyze, design, develop and test complex, low-latency client-facing applications.
- Good development experience with RDBMS.
- Good knowledge of multi-threading and high-performance server-side development.
- Basic working knowledge of Unix/Linux.
- Excellent problem solving and coding skills.
- Strong interpersonal, communication and analytical skills.
- Should be able to clearly express design ideas and thoughts.
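The multithreading and high-performance server-side skills listed above usually come down to patterns like producer/consumer with a bounded queue. The role itself is Java; the sketch below shows the same language-agnostic pattern in Python, with the doubling step standing in for real work:

```python
import queue
import threading

def produce(q, items):
    # Push work items, then a sentinel so the consumer knows to stop.
    for item in items:
        q.put(item)
    q.put(None)

def consume(q, out):
    while True:
        item = q.get()
        if item is None:
            break
        out.append(item * 2)  # stand-in for real processing

q = queue.Queue(maxsize=4)  # bounded queue applies backpressure to the producer
results = []
producer = threading.Thread(target=produce, args=(q, range(5)))
consumer = threading.Thread(target=consume, args=(q, results))
producer.start(); consumer.start()
producer.join(); consumer.join()
```

The bounded queue is the key design choice: it blocks a fast producer instead of letting memory grow without limit, which is the same backpressure idea behind high-throughput server pipelines in any language.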
About Us:
At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide.
Founded in 2015, our core philosophy is built around a strong product engineering mindset—ensuring every solution is architected and delivered right the first time. Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality.
We don’t just meet expectations—we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives. We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more.
Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients the flexibility to adapt to their evolving business needs. Whether it’s AI/ML for unstructured data processing, cloud enablement, or data engineering, Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our proprietary Interview Ninja platform ensures we hire the best, building high-performing teams that deliver unmatched results.
Today, Wissen Technology has a global footprint with 2200+ employees across offices in the US, UK, UAE, India, and Australia. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact—the first time, every time.
Website: www.wissen.com
Python Developer - AI/ML
Your Responsibilities
- Develop, train, and optimize ML models using PyTorch, TensorFlow, and Keras.
- Build end-to-end LLM and RAG pipelines using LangChain and LangGraph.
- Work with LLM APIs (OpenAI, Anthropic Claude, Azure OpenAI) and implement prompt engineering strategies.
- Utilize Hugging Face Transformers for model fine-tuning and deployment.
- Integrate embedding models for semantic search and retrieval systems.
- Work with transformer-based architectures (BERT, GPT, LLaMA, Mistral) for production use cases.
- Implement LLM evaluation frameworks (RAGAS, LangSmith) and performance optimization.
- Design and maintain Python microservices using FastAPI with REST/GraphQL APIs.
- Implement real-time communication with FastAPI WebSockets.
- Implement pgvector for embedding storage and similarity search with efficient indexing strategies.
- Integrate vector databases (pgvector, Pinecone, Weaviate, FAISS, Milvus) for retrieval pipelines.
- Containerize AI services with Docker and deploy on Kubernetes (EKS/GKE/AKS).
- Configure AWS infrastructure (EC2, S3, RDS, SageMaker, Lambda, CloudWatch) for AI/ML workloads.
- Version ML experiments using MLflow, Weights & Biases, or Neptune.
- Deploy models using serving frameworks (TorchServe, BentoML, TensorFlow Serving).
- Implement model monitoring, drift detection, and automated retraining pipelines.
- Build CI/CD pipelines for automated testing and deployment with ≥80% test coverage (pytest).
- Follow security best practices for AI systems (prompt injection prevention, data privacy, API key management).
- Participate in code reviews, tech talks, and AI learning sessions.
- Follow Agile/Scrum methodologies and Git best practices.
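The retrieval step at the heart of the RAG and semantic-search responsibilities above reduces to ranking stored embeddings by similarity to a query embedding. This is a minimal sketch using hand-made toy vectors (a real system would get vectors from an embedding model and store them in pgvector or a vector database); `top_k` and the corpus layout are hypothetical names for the example:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, corpus, k=2):
    """Return the ids of the k corpus entries most similar to the query.

    corpus: list of (doc_id, vector) pairs.
    """
    scored = sorted(corpus, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]
```

In a RAG pipeline the returned documents are then stuffed into the LLM prompt as context; pgvector performs this same ranking in SQL (e.g. with its distance operators) plus an index so it scales past a linear scan.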
Required Qualifications
- Bachelor's or Master's degree in Computer Science, AI/ML, or related field.
- 2–5 years of Python development experience (Python 3.9+) with strong AI/ML background.
- Hands-on experience with LangChain and LangGraph for building LLM-powered workflows and RAG systems.
- Deep learning experience with PyTorch or TensorFlow.
- Experience with Hugging Face Transformers and model fine-tuning.
- Proficiency with LLM APIs (OpenAI, Anthropic, Azure OpenAI) and prompt engineering.
- Strong experience with FastAPI frameworks.
- Proficiency in PostgreSQL with pgvector extension for embedding storage and similarity search.
- Experience with vector databases (pgvector, Pinecone, Weaviate, FAISS, or Milvus).
- Experience with model versioning tools (MLflow, Weights & Biases, or Neptune).
- Hands-on with Docker, Kubernetes basics, and AWS cloud services.
- Skilled in Git workflows, automated testing (pytest), and CI/CD practices.
- Understanding of security principles for AI systems.
- Excellent communication and analytical thinking.
Nice to Have
- Experience with multiple vector databases (Pinecone, Weaviate, FAISS, Milvus).
- Knowledge of advanced LLM fine-tuning (LoRA, QLoRA, PEFT) and RLHF.
- Experience with model serving frameworks and distributed training.
- Familiarity with workflow orchestration tools (Airflow, Prefect, Dagster).
- Knowledge of quantization and model compression techniques.
- Experience with infrastructure as code (Terraform, CloudFormation).
- Familiarity with data versioning tools (DVC) and AutoML.
- Experience with Streamlit or Gradio for ML demos.
- Background in statistics, optimization, or applied mathematics.
- Contributions to AI/ML or LangChain/LangGraph open-source projects.
About us:
TAPPP (https://tappp.com/) is building a next-generation digital platform on a cell-based architecture, integrating technologies such as Artificial Intelligence, Rules, Workflows, Microservices, FaaS (Function as a Service), Micro-frontends, and Micro apps. The result is a highly extensible, cutting-edge platform that brings sports fans together with broadcasters, sports teams, and sportsbooks to create a marketplace for all aspects of sports. We are available across platforms via the Web, Mobile, Roku, and Tablets.
Building out this brand presents significant product and engineering challenges. At the center of solving those challenges is the TAPPP Product Engineering team which is responsible for the TAPPP product end to end.
TAPPP is led by a very able leadership team drawn from industry leaders at companies like ESPN, Amazon, Blackhawk, Kargocard, Visa, and many others.
The organization is flat, processes are minimal, individual responsibility is big, and there is an emphasis on keeping non-productive influences out of the everyday technical decision-making process. Upholding these philosophies will be imperative as we execute our aggressive plan of global expansion over the next 2 years.
Who are we looking for:
A coding enthusiast who loves writing elegant code and developing software systems.
As a senior java developer, you will be a part of the core product development team that is responsible for building high-performant components of the TAPPP platform.
Your responsibility:
- You will be responsible for designing, coding, reviewing, testing, and bug-fixing different modules of the software product that needs to work seamlessly across different environments.
- Write production-quality code in Java, J2EE, and Spring
- You will work in an agile team on TAPPP's revolutionary platform. You'll use cutting-edge solutions (Spring Boot, Docker, Kafka, Redis, Continuous Delivery) to create and maintain the high-load distributed services that form our messaging platform.
Mandatory technical skills:
- Hands-on experience with
- Java 1.7+
- RDBMS (MySQL/PostgreSQL)
- JPA (Hibernate or any other ORM framework)
- Spring Boot, Spring MVC, Spring Security
- Hands-on experience in writing extensible RESTful APIs
- Hands-on in Java development (all facets of development) with a sound understanding of OOAD.
- Should have excellent debugging, code review, and design review skills
- Should have a sound understanding of a Microservice based architecture
Good to have technical skills:
Kafka
GraphQL
Redis
AWS (ECS, Cloudwatch)
Other
- Strong independent contributor
- Comfortable working in a start-up environment
The position is based in Pune, India.
Looking for Immediate joiners.
Job Description
Title - Lead Snowflake Developer
Location - Chennai/Hyderabad/Bangalore
Role - Fulltime
Notice Period/Availability - Immediate
Years of Experience - 6+
Job Description:
- Overall 6+ years of experience in IT/software development.
- Minimum 3 years of experience working with Snowflake.
- Designing, implementing and testing cloud computing solutions using Snowflake technology.
- Creating, monitoring and optimization of ETL/ELT processes.
- Migrating solutions from on-premises to public cloud platforms.
- Experience in SQL language and data warehousing concepts.
- Experience in Cloud technologies: AWS, Azure or GCP.
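The ETL/ELT processes mentioned above typically follow the ELT pattern in Snowflake: land raw data first, then transform it inside the warehouse with SQL. The sketch below uses Python's built-in sqlite3 purely as a stand-in for a Snowflake connection (the table names and columns are invented for the example); the SQL pattern is what carries over:

```python
import sqlite3  # stand-in for a Snowflake connection in this sketch

conn = sqlite3.connect(":memory:")

# 1. Load: land raw records as-is into a staging table.
conn.execute("CREATE TABLE raw_events (user_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [("u1", 10.0), ("u1", 5.0), ("u2", 7.5)],
)

# 2. Transform: aggregate inside the warehouse with SQL (the "T" of ELT).
conn.execute(
    """CREATE TABLE user_totals AS
       SELECT user_id, SUM(amount) AS total
       FROM raw_events
       GROUP BY user_id"""
)

rows = dict(conn.execute("SELECT user_id, total FROM user_totals"))
```

Doing the transform in-warehouse (rather than in application code before loading) is what lets Snowflake's compute scale the heavy aggregation, and it keeps the raw data available for reprocessing.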
InnovationM is looking for a Java Developer with experience in Spring Boot, Microservices, and MongoDB to join our team. The ideal candidate should have hands-on experience designing and developing REST APIs with Spring Boot, building microservices-based applications, working with MongoDB, and deploying applications in AWS.
What you must be good at:
● Develop and maintain REST APIs using Spring Boot framework
● Build microservices-based applications using Spring Boot and related frameworks
● Design and develop data models and queries for MongoDB database
● Deploy applications in AWS using EC2, ECR, and other relevant AWS services
● Work in an Agile development environment
● Collaborate with cross-functional teams to deliver high-quality software
What you must have:
● 4+ years of experience in Java development
● Strong experience in Spring Boot and related frameworks
● Hands-on experience in building microservices-based applications
● Proficient in designing and developing REST APIs
● Strong experience in MongoDB, including data modeling and query optimization
● Good understanding of AWS services and deployment methodologies
● Experience working in an Agile development environment
● Proficient in Git and version control systems
- A minimum of 3 years' experience as a Technical Lead in Java.
- Strong experience in the design and implementation of high-performing, highly scalable websites/applications.
- Strong experience in the Spring MVC framework and with design patterns, unit testing, etc.
- Strong experience in AWS services such as EC2, Lambda, S3, ECS, API Gateway and using tools such as Terraform, Ansible
- Strong experience in Endeca, ATG and BCC, Solr, any Headless CMS
- Considerable experience in databases such as SQL Server/Mongo DB is an added advantage
- Good experience in JavaScript frameworks such as Angular/React/VueJS is an added advantage
- Experience in Adobe AEM is an added advantage
- Creative problem-solving skills and ability to effectively communicate and translate feedback, needs and solutions
- Excellent verbal and written communication skills required
- Provide strategic thinking and leadership pertaining in cloud technologies and building scalable websites
- Provide technical leadership and guidance internally to the team and externally to management
- Communicate effectively across all levels of the organization
- Collaborate within and across teams with strong teamwork orientation
- Help maintain a culture of high quality
- Showcasing excellent work ethic and strong sense of ownership of end result
- Understand the structured software development methodologies including design, development and testing in an Agile environment
- Follow all security practices and identify areas and processes where performance and scalability can be improved
- Make and justify recommendations for improvement.
- Participate in the development of solutions for new technology directions and/or proofs of concept.
- Translate business needs into technical solutions with strong analytical, conceptual and technical skills.
- Support the evaluation, selection, design, development and maintenance of the platforms.
Job Description
We are looking for an RoR (Ruby on Rails) developer. If you're a creative problem solver who is eager to develop amazing products and hungry for a new adventure, a world-class workplace is waiting for you.
- Production experience in Ruby.
- A completed technical degree in Computer Science or any related fields.
- 3+ years of professional product development experience.
- Comfortable with microservices architectures and API-based development.
- You are a pragmatic programmer who understands what is needed to get things done.
- Problem solving and collaborative mindset.
- Experience working with DevOps (Docker, Kubernetes, Terraform).
- Experience with AWS (RDS, DDB, Lambda, CW, EC2, SQS, SNS, Cognito, Kinesis).
- Experience with performance improvements (caching techniques, SQL query optimization, performance monitoring and profiling).
- Deep understanding of service-oriented and microservices architectural patterns, troubleshooting methods and best practices.
- Takes end to end ownership of the development and operation of complete features.
Job Description:
- Must have experience of 1 – 6 years.
- Experience in Java/J2EE platform.
- Experience in web application development with JSP, Servlet, Spring Boot, Hibernate
- Knowledge of MySQL database.
- Developed REST and SOAP web services.
- Experience with version control systems and build tools.
- Must have completed BE/MCA/M.Sc/MTech.
Position Summary: Responsible for the design, development, debugging and implementation of software applications in support of end users' requirements. Works on problems of relatively complex scope, through general usage of standard concepts and principles and application of own judgement. Responsible for delivering results that have a direct impact on the achievement of results within the job area as an Individual Contributor.
Main Responsibilities: Responsible for driving and leading the analysis, design, and development activities on assigned projects.
- Involved in entire SDLC lifecycle including analysis, development, fixing and monitoring of issues on the assigned product lines.
- Meets and exceeds standards for the quality and timeliness of the work products.
- Implements, unit tests, debugs and leads integrations of complex code.
- Identify opportunities for further enhancements and refinements to best practices, standards and processes.
- Ensure robust, securely accessible, highly available and highly scalable product that meets or exceeds customer and end-user expectations
Experience: 3 – 6 years
Technical Duties & Responsibilities:
We are looking for Independent Contributors with 2-4 years of experience in scalable architecture development, a good understanding of microservices-based architecture, and a comprehensive awareness of various architectures and their suitability to product requirements:
- Can solve problems independently, take responsibility for the requirement analysis and design of important business modules, and is familiar with the online deployment environment, able to independently analyse and quickly troubleshoot production faults.
- Familiar with Java programming principles; understands advanced language features and class libraries, network and server programming, multi-threaded programming, and common open-source products.
- Experience in SOA/EIP using Apache Camel or Spring Integration.
- Writes high-quality code: reusable, loosely coupled, scalable, performant, maintainable, and secure.
- Able to architect modules and subsystems, with mastery of common architectural design methods and patterns.
- Understands SOA, event-driven and distributed system principles, large-scale network application structure, message middleware, caching, load balancing, clustering, data synchronization, and NoSQL.
- Project experience with RabbitMQ or Kafka.
- Prior experience in the airfare industry is a plus.
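The message-middleware and event-driven requirements above rest on the publish/subscribe pattern that RabbitMQ and Kafka implement. This is a toy in-process sketch of that pattern only (class and method names are invented for the example; real brokers add persistence, partitioning, and delivery guarantees):

```python
from collections import defaultdict

class Broker:
    """Toy in-process message broker illustrating publish/subscribe."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a callable to receive every message on a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        """Deliver a message to all handlers subscribed to the topic."""
        for handler in self._subscribers[topic]:
            handler(message)

broker = Broker()
received = []
broker.subscribe("orders", received.append)
broker.publish("orders", {"id": 1})
```

The decoupling shown here, where publishers never know who consumes, is what makes event-driven systems extensible: new consumers subscribe without any change to the producing service.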