- Work experience as a Python developer
- Should have experience developing and working on consumer-facing web/app products in the Django framework
- Should have experience working with React.js front-end development.
- Thorough knowledge of data stores and core AWS services; should have experience with EC2, ELB, Auto Scaling, CloudFront, S3
- Experience in front-end codebases using HTML, CSS and JavaScript
- Good understanding of Data Structures, Algorithms and Operating Systems
- Good experience developing and integrating APIs
- Knowledge of object-relational mapping (ORM)
- Able to integrate multiple data sources and databases into one system (MySQL)
- Understanding of front-end technologies, such as JavaScript, HTML5, and CSS3
- Proficient understanding of code versioning tools (Git)
- Good problem-solving skills

Bidgely is seeking an outstanding and deeply technical Principal Engineer / Sr. Principal Engineer / Architect to lead the architecture and evolution of our next-generation data and platform infrastructure. This is a senior IC role for someone who loves solving complex problems at scale, thrives in high-ownership environments, and influences engineering direction across teams.
You will be instrumental in designing scalable and resilient platform components that can handle trillions of data points, integrate machine learning pipelines, and support advanced energy analytics. As we evolve our systems for the future of clean energy, you will play a critical role in shaping the platform that powers all Bidgely products.
Responsibilities
- Architect & Design: Lead the end-to-end architecture of core platform components – from ingestion pipelines to ML orchestration and serving layers. Architect for scale (200Bn+ daily data points), performance, and flexibility.
- Technical Leadership: Act as a thought leader and trusted advisor for engineering teams. Review designs, guide critical decisions, and set high standards for software engineering excellence.
- Platform Evolution: Define and evolve the platform’s vision, making key choices in data processing, storage, orchestration, and cloud-native patterns.
- Mentorship: Coach senior engineers and staff on architecture, engineering best practices, and system thinking. Foster a culture of engineering excellence and continuous improvement.
- Innovation & Research: Evaluate and experiment with emerging technologies (e.g., event-driven architectures, AI infrastructure, new cloud-native tools) to stay ahead of the curve.
- Cross-functional Collaboration: Partner with Engineering Managers, Product Managers, and Data Scientists to align platform capabilities with product needs.
- Non-functional Leadership: Ensure systems are secure, observable, resilient, performant, and cost-efficient. Drive excellence in areas like compliance, DevSecOps, and cloud cost optimization.
- GenAI Integration: Explore and drive adoption of Generative AI to enhance developer productivity, platform intelligence, and automation of repetitive engineering tasks.
Requirements:
- 8+ years of experience in backend/platform architecture roles, ideally with experience at scale.
- Deep expertise in distributed systems, data engineering stacks (Kafka, Spark, HDFS, NoSQL DBs like Cassandra/ElasticSearch), and cloud-native infrastructure (AWS, GCP, or Azure).
- Proven ability to architect high-throughput, low-latency systems with batch + real-time processing.
- Experience designing and implementing DAG-based data processing and orchestration systems.
- Proficient in Java (Spring Boot, REST), and comfortable with infrastructure-as-code and CI/CD practices.
- Strong understanding of non-functional areas: security, scalability, observability, and compliance.
- Exceptional problem-solving skills and a data-driven approach to decision-making.
- Excellent communication and collaboration skills with the ability to influence at all levels.
- Prior experience working in a SaaS environment is a strong plus.
- Experience with GenAI tools or frameworks (e.g., LLMs, embedding models, prompt engineering, RAG, Copilot-like integrations) to accelerate engineering workflows or enhance platform intelligence is highly desirable.
- We are looking for a strong backend developer with good experience in AWS.
- Should be able to write solid and clean code.
- Should be good with algorithms and architecture.
What We’re Looking For:
- Strong experience in Python (5+ years).
- Hands-on experience with any database (SQL or NoSQL).
- Experience with frameworks like Flask, FastAPI, or Django.
- Knowledge of ORMs, API development, and unit testing.
• Testing and evaluating new programs
• Writing and implementing efficient code
• Identifying areas for modification in existing programs and subsequently developing these modifications
• Working closely with other developers, business and systems analysts
• Drive an effective and efficient scrum process where all team members work in the same direction. Ensure efficiency and effectiveness of your team by continuously improving processes
• Provide fact-based technical feedback on each squad member to managers as part of the evaluation cycle.
• Independently complete complex development tasks and actively contribute to pushing code to production. Write testable, efficient, and reusable code suitable for continuous integration and deployment, respecting best practices and industry development standards.
• Be accountable for code quality, with the assistance of a QA Analyst, by conducting adequate testing. Develop a deep understanding of the product roadmap for the squad, including future features to be developed.
• Contribute to high-level estimation and participate in laying out the development sequences, challenging the product roadmap and identifying areas where technical debt can be reduced or avoided.
• Contribute to solution designs, challenging other members on technical decisions and explaining the technical design to junior developers so they can write documentation for the rest of the team.
• Coordinate actively with the solution architect to ensure an appropriate level of validation. Be accountable for performance, reliability, scalability and resilience through SLAs and monitoring
• Raise the bar of professional software development by practicing it and helping others learn the craft through code reviews and coaching. Actively contribute to the internal peer learning platform, to promote continuous learning. Participate in the onboarding of new developers.
• Conduct interviews, document outcomes and help raise the bar for the people we hire, both internal and external resources
• Team player with a high sense of accountability and ownership. Willing to try new things, not afraid to fail, learn from failures and grow. Strong verbal and non-verbal communication skills
Role Responsibilities:
- Development and Maintenance of REST APIs: Lead the creation and management of our RESTful APIs, ensuring top-notch performance and alignment with evolving requirements.
- Proficiency in Coding: We're in search of expertise in Python or equivalent programming languages. Your coding skills will play a pivotal role in delivering high-quality (efficient, reusable, testable, and scalable) solutions.
- Unit and Integration Testing: Apply your expertise to craft unit and integration tests, upholding code quality and reliability.
- Version Control Systems: Proficiency in Distributed Version Control Systems is vital for seamless collaboration during development.
- Elasticsearch Expertise: Having valuable experience with Elasticsearch is a plus, given its critical role in data retrieval and search functionalities.
- NoSQL Database Familiarity: Knowledge of NoSQL databases like Cassandra and MongoDB will be advantageous.
- Message Broker Knowledge: Understanding message brokers, especially RabbitMQ, is beneficial for effective communication within our systems.
Desired Qualifications:
- Experience: 1-2 years of hands-on experience as a Python developer.
- AWS: Proficiency in AWS cloud management and architecting enterprise data solutions.
- Pragmatic Problem-Solving: Recognize when a solution should be streamlined and when creating the right abstraction will lead to long-term efficiency gains.
- Passion for Quality: Demonstrate dedication to producing work of the highest quality and following best practices.
- Agile/Lean Process: Familiarity with Agile/Lean methodologies is a plus, reflecting your adaptability and collaborative spirit.
- Startup Mindset: Embrace the challenges and opportunities of a startup environment, contributing your skills and insights to our growth.
- Debugging and Optimization: Showcase excellent debugging and optimization capabilities to enhance system performance.
- Tech Awareness: Stay updated on emerging technologies and possess a solid understanding of the full product development life cycle.
- UX and Information Architecture: Exhibit excellent knowledge of mobile user experience, information architecture, and industry trends.
BASIC QUALIFICATIONS
- 8+ years of IT experience, most of which has been spent helping design and implement software suites/modules.
- 2+ years on any cloud platform (AWS, Azure, Google Cloud, etc.).
- Bachelor’s degree in Information Science / Information Technology, Computer Science, Engineering, Mathematics, Physics, or a related field.
- Strong verbal and written communication skills, with the ability to work effectively across internal and external organizations.
- Strong programming skills in Java.
- Strong hands-on experience in integrating multiple databases like Oracle, SQL Server, PostgreSQL etc.
- Deep hands-on experience in the design, development and deployment of business software at scale.
- Customer facing skills to represent AWS well within the customer’s environment and drive discussions with senior personnel regarding trade-offs, best practices, project management and risk mitigation
- Experience leading or contributing to highly available and fault-tolerant enterprise and web-scale software applications.
- Experience in performance optimization techniques
- Strong troubleshooting and communication skills.
- Proven experience with software development life cycle (SDLC) and agile/iterative methodologies required
PREFERRED QUALIFICATIONS
- Implementation experience with primary AWS services (EC2, ELB, RDS, Lambda, API Gateway, Route 53 & S3).
- AWS Solutions Architect Certified
- Experience in programming languages like Java/Python.
- Demonstrated ability to think strategically about business, product, and technical challenges
- Integration of AWS cloud services with on-premise technologies from Microsoft, IBM, Oracle, HP, SAP etc.
- Experience with IT compliance and risk management requirements (e.g., security, privacy, SOX, HIPAA, etc.).
- Extended travel to customer locations may be required to sell and deliver professional services as needed
• BE/B.Tech. in Computer Science or MCA from a reputed university.
• 8+ years of experience in software development, with emphasis on Java/J2EE server-side programming.
• Hands-on experience in Core Java, multithreading, RMI, socket programming, JDBC, NIO, web services and design patterns.
• Should have knowledge of distributed systems, distributed caching, messaging frameworks, ESB, etc.
• Knowledge of the Linux operating system and PostgreSQL/MySQL/MongoDB/Cassandra databases is essential.
• Additionally, knowledge of HBase, Hadoop and Hive is desirable.
• Familiarity with message queue systems such as AMQP and Kafka is desirable.
• Should have experience as a participant in Agile methodologies.
• Should have excellent written and verbal communication skills and presentation skills.
• This is not a full-stack requirement; we are purely looking for backend resources.
Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the DataLake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides many opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low latency access to distributed storage, auto scaling, and self healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
- Lead the team to solve complex and unknown problems
- Solve technical problems and customer issues with technical expertise
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Mentor other team members for high quality and design
- Collaborate with Product Management to deliver on customer requirements and innovation
- Collaborate with Support and field teams to ensure that customers are successful with Dremio
Requirements
- B.S./M.S./equivalent in Computer Science or a related technical field, or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years experience in developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed file systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems and mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team
