



Responsibilities:
- Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world.
- Verifying data quality, and/or ensuring it via data cleaning.
- Adapting and working quickly to produce outputs that improve stakeholder decision-making using ML.
- To design and develop Machine Learning systems and schemes.
- To perform statistical analysis and fine-tune models using test results.
- To train and retrain ML systems and models as and when necessary.
- To deploy ML models to production and manage the cost of cloud infrastructure.
- To develop Machine Learning apps according to client and data scientist requirements.
- To analyze the problem-solving capabilities and use-cases of ML algorithms and rank them by how successful they are in meeting the objective.
Technical Knowledge:
- Experience working on real-time problems and solving them with ML and deep learning models deployed in production, with strong projects to showcase.
- Proficiency in Python and experience working with Jupyter notebooks, Google Colab, and cloud-hosted notebooks such as AWS SageMaker, Databricks, etc.
- Proficiency with libraries such as scikit-learn, TensorFlow, OpenCV, PySpark, Pandas, NumPy, and related libraries.
- Expert in visualising and manipulating complex datasets.
- Proficiency in working with visualisation libraries such as seaborn, plotly, matplotlib etc.
- Proficiency in Linear Algebra, statistics and probability required for Machine Learning.
- Proficiency in ML algorithms, for example gradient boosting, stacked/ensemble models, classification algorithms, and deep learning algorithms. Experience in hyperparameter tuning across models and comparing algorithm performance is required (a brief sketch follows this list).
- Big data Technologies such as Hadoop stack and Spark.
- Basic use of cloud infrastructure (VMs, e.g. EC2).
- Brownie points for Kubernetes and Task Queues.
- Strong written and verbal communications.
- Experience working in an Agile environment.
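
For illustration only (not part of the posting): a minimal scikit-learn sketch of the hyperparameter tuning and model comparison described in the list above. The dataset, parameter grid, and metric are assumptions chosen to keep the example self-contained.

```python
# Hedged sketch: tune a gradient boosting classifier and compare it against a
# simple baseline on a toy dataset. Grid, metric, and dataset are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Hyperparameter tuning for the gradient boosting model
search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [100, 200], "learning_rate": [0.05, 0.1]},
    cv=5,
    scoring="roc_auc",
)
search.fit(X, y)

# Baseline model for comparison
baseline_auc = cross_val_score(
    LogisticRegression(max_iter=5000), X, y, cv=5, scoring="roc_auc"
).mean()

print(f"Tuned gradient boosting AUC: {search.best_score_:.3f} ({search.best_params_})")
print(f"Logistic regression baseline AUC: {baseline_auc:.3f}")
```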

About SmartJoules
Energy efficiency is the cleanest, quickest and cheapest way to bring more than 300 million Indians out of energy poverty.
By eliminating waste in their own operations, building owners can save money, simplify operations, improve comfort and free up resources for the less fortunate. Smart Joules makes this process seamless and profitable from day one.

Wissen Technology is hiring for Data Engineer
About Wissen Technology: At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, Wissen Technology is built around a strong product engineering mindset, ensuring every solution is architected and delivered right the first time. Today, we have a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia.

Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don't just meet expectations; we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives. We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more.

Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods give clients the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact, the first time, every time.
Job Summary: Wissen Technology is hiring a Data Engineer with expertise in Python, Pandas, Airflow, and Azure Cloud Services. The ideal candidate will have strong communication skills and experience with Kubernetes.
Experience: 4-7 years
Notice Period: Immediate to 15 days
Location: Pune, Mumbai, Bangalore
Mode of Work: Hybrid
Key Responsibilities:
- Develop and maintain data pipelines using Python and Pandas.
- Implement and manage workflows using Airflow (a minimal DAG sketch follows this list).
- Utilize Azure Cloud Services for data storage and processing.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Ensure data quality and integrity throughout the data lifecycle.
- Optimize and scale data infrastructure to meet business needs.
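
As referenced in the responsibilities above, here is a minimal, hedged sketch of a Pandas-based pipeline orchestrated with Airflow's TaskFlow API (Airflow 2.x assumed). The file paths, column names, and schedule are illustrative placeholders, not Wissen's actual pipeline.

```python
# Hedged sketch: a daily Airflow DAG that extracts a CSV, cleans it with Pandas,
# and writes the result. All paths and columns are hypothetical.
from datetime import datetime

import pandas as pd
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def sales_cleanup_pipeline():
    @task
    def extract() -> str:
        raw_path = "/tmp/raw_sales.csv"  # hypothetical input location
        # Stand-in for a real extract step (e.g. pulling from blob storage).
        pd.DataFrame({"order_id": [1, 2], "amount": [10.0, None]}).to_csv(raw_path, index=False)
        return raw_path

    @task
    def transform(raw_path: str) -> str:
        df = pd.read_csv(raw_path)
        df = df.dropna(subset=["amount"])  # basic data-quality rule
        clean_path = "/tmp/clean_sales.csv"
        df.to_csv(clean_path, index=False)
        return clean_path

    transform(extract())


sales_cleanup_pipeline()
```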
Qualifications and Required Skills:
- Proficiency in Python (Must Have).
- Strong experience with Pandas (Must Have).
- Expertise in Airflow (Must Have).
- Experience with Azure Cloud Services.
- Good communication skills.
Good to Have Skills:
- Experience with Pyspark.
- Knowledge of Kubernetes.
Wissen Sites:
- Website: http://www.wissen.com
- LinkedIn: https://www.linkedin.com/company/wissen-technology
- Wissen Leadership: https://www.wissen.com/company/leadership-team/
- Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All
- Wissen Thought Leadership: https://www.wissen.com/articles/
Job Summary:
Join Springer Capital as a Cybersecurity & Cloud Intern to help architect, secure, and automate our cloud-based backend systems powering next-generation investment platforms.
Job Description:
Founded in 2015, Springer Capital is a technology-forward asset management and investment firm. We leverage cutting-edge digital solutions to uncover high-potential opportunities, transforming traditional finance through innovation, agility, and a relentless commitment to security and scalability.
Job Highlights
Work hands-on with AWS, Azure, or GCP to design and deploy secure, scalable backend infrastructure.
Collaborate with DevOps and engineering teams to embed security best practices in CI/CD pipelines.
Gain experience in real-world incident response, vulnerability assessment, and automated monitoring.
Drive meaningful impact on our security posture and cloud strategy from Day 1.
Enjoy a fully remote, flexible internship with global teammates.
Responsibilities
Assist in architecting and provisioning cloud resources (VMs, containers, serverless functions) with strict security controls.
Implement identity and access management, network segmentation, encryption, and logging best practices.
Develop and maintain automation scripts for security monitoring, patch management, and incident alerts (an illustrative sketch follows this list).
Support vulnerability scanning, penetration testing, and remediation tracking.
Document cloud architectures, security configurations, and incident response procedures.
Partner with backend developers to ensure secure API gateways, databases, and storage services.
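
For illustration only (not Springer Capital's actual tooling): a small boto3 script of the kind of automated security monitoring described above, flagging security groups that expose SSH to the internet. The region, port, and alert format are assumptions.

```python
# Hedged sketch: list EC2 security groups that allow SSH (port 22) from 0.0.0.0/0.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption


def find_open_ssh_groups():
    """Return security group IDs that allow 0.0.0.0/0 on port 22."""
    flagged = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            if rule.get("FromPort") == 22 and any(
                ip_range.get("CidrIp") == "0.0.0.0/0" for ip_range in rule.get("IpRanges", [])
            ):
                flagged.append(sg["GroupId"])
    return flagged


if __name__ == "__main__":
    for group_id in find_open_ssh_groups():
        print(f"ALERT: {group_id} allows SSH from anywhere")
```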
What We Offer
Mentorship: Learn directly from senior security engineers and cloud architects.
Training & Certifications: Access to online courses and support for AWS/Azure security certifications.
Impactful Projects: Contribute to critical security and cloud initiatives that safeguard our digital assets.
Remote-First Culture: Flexible hours and the freedom to collaborate from anywhere.
Career Growth: Build a strong foundation for a future in cybersecurity, cloud engineering, or DevSecOps.
Requirements
Pursuing or recently graduated in Computer Science, Cybersecurity, Information Technology, or a related discipline.
Familiarity with at least one major cloud platform (AWS, Azure, or GCP).
Understanding of core security principles: IAM, network security, encryption, and logging.
Scripting experience in Python, PowerShell, or Bash for automation tasks.
Strong analytical, problem-solving, and communication skills.
A proactive learner mindset and passion for securing cloud environments.
About Springer Capital
Springer Capital blends financial expertise with digital innovation to redefine asset management. Our mission is to drive exceptional value by implementing robust, technology-driven strategies that transform investment landscapes. We champion a culture of creativity, collaboration, and continuous improvement.
Location: Global (Remote)
Job Type: Internship
Pay: $50 USD per month
Work Location: Remote
Job Description for QA Engineer:
- 6-10 years of experience in ETL Testing, Snowflake, DWH Concepts.
- Strong SQL knowledge & debugging skills are a must.
- Experience with Azure and Snowflake testing is a plus
- Experience with Qlik Replicate and Compose (Change Data Capture) tools is considered a plus
- Strong data warehousing concepts and experience with ETL tools such as Talend Cloud Data Integration and Pentaho/Kettle
- Experience with JIRA and the Xray defect management tool is good to have.
- Exposure to the financial domain is considered a plus
- Testing data readiness (data quality) and addressing code or data issues (a simple reconciliation sketch follows this list)
- Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions
- Demonstrated strong collaboration across regions (APAC, EMEA and NA) to effectively and efficiently identify the root cause of code/data issues and arrive at a permanent solution
- Prior experience with State Street and Charles River Development (CRD) considered a plus
- Experience in tools such as PowerPoint, Excel, SQL
- Exposure to Third party data providers such as Bloomberg, Reuters, MSCI and other Rating agencies is a plus
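
As referenced above, a minimal sketch of the source-versus-target reconciliation checks that underpin ETL/DWH testing, assuming pandas and two database connections usable by pandas.read_sql. The table name, columns, and checksum rule are hypothetical placeholders, not the team's actual framework.

```python
# Hedged sketch: compare row counts and an amount checksum between a source table
# and its data-warehouse copy. Connections and schema are placeholders.
import pandas as pd


def reconcile(source_conn, target_conn, table: str) -> bool:
    """Return True if row counts and SUM(amount) match between source and target."""
    query = f"SELECT COUNT(*) AS row_count, SUM(amount) AS amount_total FROM {table}"
    src = pd.read_sql(query, source_conn)
    tgt = pd.read_sql(query, target_conn)

    rows_match = int(src.iloc[0, 0]) == int(tgt.iloc[0, 0])
    totals_match = abs(float(src.iloc[0, 1]) - float(tgt.iloc[0, 1])) < 1e-6

    print(f"{table}: rows match={rows_match}, amount totals match={totals_match}")
    return rows_match and totals_match
```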
Key Attributes include:
- Team player with professional and positive approach
- Creative, innovative and able to think outside of the box
- Strong attention to detail during root cause analysis and defect issue resolution
- Self-motivated & self-sufficient
- Effective communicator both written and verbal
- Brings a high level of energy with enthusiasm to generate excitement and motivate the team
- Able to work under pressure with tight deadlines and/or multiple projects
- Experience in negotiation and conflict resolution
· Maximum 5 years of Information Technology/Technology Operations/Information Security experience required.
· Minimum 3 years of experience in Cybersecurity, Identity & Access Management, Role Based Access Control, and Identity Governance is mandatory.
· Knowledge of user life cycle management, access provisioning, and access administration is a must.
· Experience with technologies such as Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC) is required.
· Experience in user access re-certification activities is mandatory (a small reporting sketch follows this list).
· Working knowledge of Active Directory is a must.
· Working experience with any IAM tool (SailPoint/Okta/OneIdentity/Varonis/MIM) would be an added advantage.
· Knowledge of Identity and Access Management roles, processes, and tools is a must.
· Prior experience in processing IAM requests (Add/Modify/Delete) is a must.
· Experienced in Incident management & Change Management processes.
· Knowledge of and the ability to adhere to SAS and SOX audit requirements pertaining to Identity & Access Management job requirements.
· Experience with work-flow management tools such as ServiceNow.
· Leveraging creative thinking, problem-solving skills, and individual initiative, and utilizing MS Office (Word, Excel, Access, and PowerPoint).
· Understanding personal and team roles; contributing to a positive working environment by building solid relationships with team members; proactively seeking guidance, clarification and feedback.
· Identifying and addressing business needs: building relationships with stakeholders; developing an awareness of Firm services; communicating with the business/stakeholders in an organized and knowledgeable manner; delivering clear requests for information; demonstrating flexibility in prioritizing and completing tasks; and communicating potential conflicts to a supervisor.
· Experience performing user administration tasks for various in-house and third-party applications.
· Analyzing, prioritizing, and driving faults to resolution; resolving tickets according to SLAs and escalation procedures.
· Strong analytical, problem solving and organizational skills. Be proactive, dynamic, and flexible.
· Good Communication skills, able to articulate well with business and stakeholders.
· Education Qualification: Any graduate/postgraduate with a Computer Science background.
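
As referenced above, a small, hypothetical sketch of flagging accounts due for access re-certification from an access-report export. The file name, column names, and 90-day cycle are assumptions for illustration only, not the employer's actual process.

```python
# Hedged sketch: list entitlements whose last certification is older than the cycle.
from datetime import datetime, timedelta

import pandas as pd

RECERT_PERIOD = timedelta(days=90)  # assumed quarterly re-certification cycle


def accounts_due_for_recert(report_csv: str) -> pd.DataFrame:
    """Return rows whose last certification date is older than the recert period."""
    df = pd.read_csv(report_csv, parse_dates=["last_certified"])
    cutoff = datetime.now() - RECERT_PERIOD
    return df.loc[df["last_certified"] < cutoff, ["user_id", "entitlement", "last_certified"]]


# Example usage (hypothetical export file):
# print(accounts_due_for_recert("access_report.csv"))
```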
About hoichoi:
hoichoi is an on-demand video streaming entertainment platform for Bengalis worldwide. With an array of exciting Content choices including Bengali Classics, Blockbusters, Documentaries, Short Films and Exclusive original web series we aim to be home to the best in Bengali entertainment.
Hoichoi is the digital vertical of SVF Entertainment - a leading Media and Entertainment company in East India, with 5 National Awards to its credit and capabilities in Film and TV Production, Cinemas, Distribution, Digital Cinema, Music and New Media.
As we take on the next level of growth, we are on the lookout for passionate and talented people to join our team, committed to redefining online entertainment for Bengalis globally.
Overview:
As a “customer first” company, Hoichoi considers Customer Support a critical division. This team comprises enthusiastic, passionate, fun-loving, and highly communicative individuals who continuously work towards ensuring a great experience for Hoichoi users at all times.
This is a high commitment role and you will fit right in if you believe in delivering the best service experience and are passionate about Bengali entertainment.
Job Role:
· Answer user communication/queries via emails, live chats and calls
· Ensure minimum turnaround time for resolving user queries & complaints
· Deliver best-in-class service in a friendly and timely manner
· Suggest process & product improvements based on user feedback
· Assist the team in reporting & analysis
Qualifications:
· 2+ years’ experience in customer service specializing in Inbound/Outbound support, ideally from an internet start-up/e-commerce background
· Excellent proficiency in Bengali, English & Hindi
· Critical thinking and problem-solving skills
· An understanding of the 'Customer First' principle and a commitment to consistently delivering high-quality customer support.


- C#, OOP, MS SQL, MVC
Role & Responsibilities:
- Act as an important team member who can play a key role in software development activities.
- Act as a bridge between the Team Lead and Software Engineers.
- Understand and implement new requirements while maintaining the existing application.
Requirements:
- Strong background in C#, OOP, databases, and MS SQL.
- Sound knowledge of OOP concepts and hands-on OOP experience.
- Strong code troubleshooting, problem-solving, and analytical skills.
- Nice to have: knowledge of SSRS and the Crystal Reports tool.
- Database concepts and PL/SQL queries (stored procedures, functions, triggers, etc.).
- .Net Core.
- Entity framework or Other ORM tools.
Conduct competitive analysis
Be actively involved in SEO efforts (keyword, image optimization, etc.)
Provide creative ideas for content marketing and update the website
Measure the performance of digital marketing efforts using a variety of web analytics tools (Google Analytics, WebTrends, etc.)
Plan social media events
Prepare social media content
Write blogs
Handle social media accounts
Software Development Engineer (III) @ REBEL FOODS
We are surrounded by the world's leading consumer companies led by technology - Amazon for retail, Airbnb for hospitality, Uber for mobility, Netflix and Spotify for entertainment, etc. Food & Beverage is the only consumer sector where large players are still traditional restaurant companies. At Rebel Foods, we are challenging this status quo as we are building the world's most valuable restaurant company on the internet, superfast.

The opportunity for us is immense due to the exponential growth in the food delivery business worldwide, which has helped us build 'The World's Largest Internet Restaurant Company' in the last few years. Rebel Foods' current presence in 7 countries (India, Indonesia, UAE, UK, Malaysia, Singapore, Bangladesh) with 15+ brands and 3500+ internet restaurants has been built on a simple system - The Rebel Operating Model. While for us it is still Day 1, we know we are in the middle of a revolution towards creating never-seen-before customer-first experiences. We bring you a once-in-a-lifetime opportunity to disrupt the 500-year-old industry with technology at its core.
We urge you to refer to the below to understand how we are changing the restaurant industry before applying at Rebel Foods.
Link: https://jaydeep-barman.medium.com/why-is-rebel-foods-hiring-super-talented-engineers-b88586223ebe
Link 1: https://jaydeep-barman.medium.com/how-to-build-1000-restaurants-in-24-months-the-rebel-method-cb5b0cea4dc8
Link 2: https://medium.com/faasos-story/winning-the-last-frontier-for-consumer-internet-5f2a659c43db
Link 3: https://medium.com/faasos-story/a-unique-take-on-food-tech-dcef8c51ba41
An opportunity to revolutionize the restaurant industry
Here, at Rebel Foods, we are using technology and automation to disrupt the traditional food industry. We are focused on building an operating system for Cloud Kitchens - using the most innovative technologies - to provide the best food experiences for our customers.
You will enjoy working with us, if:
- You are passionate about using technology to solve customer problems
- You are a software craftsman or craftswoman who is obsessed with high quality software
- You have a flair for good design and architecture
- You are unafraid of rearchitecting or refactoring code to improve it
- You are willing to dive deep to solve complex software issues
- You are a teacher and mentor
Our technology ecosystem:
- Languages: Java, Typescript, Javascript, Ruby
- Frameworks: Spring Boot, NodeJS, ExpressJS
- Databases: AWS Aurora, MySQL, MongoDB
- Cloud: AWS
- Microservices, Service Oriented Architecture: REST APIs, Caching, Messaging, Logging, Monitoring and Alerting
- CI/CD and DevOps
- Bitbucket, Jira
You will mostly spend time on the following:
- Leading the design and implementation of software systems
- Driving engineering initiatives across teams with a focus on quality, maintainability, availability, scalability, security, performance and stability
- Writing efficient, maintainable, scalable, high quality code
- Reviewing code and tests
- Refactoring and improving code
- Teaching and mentoring team members
We’re excited about you if you have:
- At least 4 years of experience in software development, including experience building microservices and distributed systems
- Excellent programming skills in one or more languages: Java, C#, C++, Typescript, Javascript, Python or Ruby
- Experience working in Cloud environments: AWS, Azure, GCP
- Experience building secure, configurable, observable services
- Excellent troubleshooting and problem-solving skills
- The ability to work in an Agile environment
- The ability to collaborate effectively within and across engineering, product and business teams
We value engineers who are:
- Crazy about customer experience
- Willing to challenge the status quo and innovate
- Obsessed with quality, performance and frugality
- Willing to take complete responsibility and ownership of results
- Team players, teachers, mentors
The Rebel Culture
We believe in empowering and growing people to perform the best at their job functions. We follow outcome-oriented, fail-fast iterative & collaborative culture to move fast in building tech solutions. Rebel is not a usual workplace. The following slides will give you a sense of our culture, how Rebel conducts itself and who will be the best fit for our company. We suggest you go through it before making up your mind.
Link 4: https://www.slideshare.net/
- Manage and support delivery of key corporate projects on crèche benefit
- Manage the corporate relationships for clients for whom we manage the annual childcare benefit
- Create and maintain systematic corporate MIS (leads, pipeline, hot conversations) to support business development
- Reach out to new corporates with ProEves services and support in conducting corporate surveys on childcare and maternity benefits
- Respond to queries from corporates looking to provide creche benefits to their employees. Understand their needs, share the ProEves proposal, and follow up
- Liaise with daycare partners to ensure a smooth childcare benefit experience for the client
- Support the Management team in all corporate reach outs - mailers, events, meetings

Role and Responsibilities
- Build a low latency serving layer that powers DataWeave's Dashboards, Reports, and Analytics functionality
- Build robust RESTful APIs that serve data and insights to DataWeave and other products (a minimal API sketch follows this list)
- Design user interaction workflows on our products and integrate them with data APIs
- Help stabilize and scale our existing systems. Help design the next generation systems.
- Scale our back end data and analytics pipeline to handle increasingly large amounts of data.
- Work closely with the Head of Products and UX designers to understand the product vision and design philosophy
- Lead/be a part of all major tech decisions. Bring in best practices. Mentor younger team members and interns.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope.
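
As referenced in the responsibilities above, a minimal sketch of a low-latency serving endpoint with an in-process cache, written in Python with FastAPI. The framework choice, route, and fetch_product_insights helper are assumptions for illustration, not DataWeave's actual serving layer.

```python
# Hedged sketch: a cached read endpoint that serves insights to dashboards/reports.
from functools import lru_cache

from fastapi import FastAPI, HTTPException

app = FastAPI()


@lru_cache(maxsize=1024)
def fetch_product_insights(product_id: str) -> dict:
    # Placeholder for a real query against the analytics pipeline or data store.
    catalog = {"sku-1": {"price_trend": "down", "competitors_tracked": 14}}
    return catalog.get(product_id, {})


@app.get("/api/v1/insights/{product_id}")
def get_insights(product_id: str) -> dict:
    insights = fetch_product_insights(product_id)
    if not insights:
        raise HTTPException(status_code=404, detail="Unknown product")
    return {"product_id": product_id, "insights": insights}

# Run locally (assuming the file is named serving.py and uvicorn is installed):
#   uvicorn serving:app --reload
```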
Skills and Requirements
- 8-15 years of experience building and scaling APIs and web applications.
- Experience building and managing large scale data/analytics systems.
- Have a strong grasp of CS fundamentals and excellent problem solving abilities. Have a good understanding of software design principles and architectural best practices.
- Be passionate about writing code and have experience coding in multiple languages, including at least one scripting language, preferably Python.
- Be able to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
- Be a self-starter—someone who thrives in fast paced environments with minimal ‘management’.
- Have experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra, Elastic.
- Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, Datadog etc.
- Working knowledge of building websites and apps. Good understanding of integration complexities and dependencies.
- Working knowledge of Linux server administration as well as the AWS ecosystem is desirable.
- It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show off some of the projects you have hosted on GitHub.

