50+ Remote Python Jobs in India
Company Location: South Korea (Republic of Korea)
Form of Employment: Remote, Full-Time, Long-Term Contract
Monthly Salary: Negotiable
MarkAny is an information security company headquartered in Seoul, South Korea. Its technologies include DRM, anti-forgery of electronic documents, digital signatures, and digital watermarking. Building on these technologies, MarkAny offers information security products for data protection, document encryption, electronic certification, and copyright protection.
Responsibilities:
- Collaborate with the development team to understand the requirements and design specifications.
- Integrate AI models into the existing VMS system.
- Optimize and maintain the VMS software for performance and reliability.
- Conduct thorough testing and debugging of the VMS system.
- Work closely with the AI team to ensure seamless integration and functionality.
- Provide technical support and troubleshooting for the VMS system.
- Document all development processes, modifications, and updates.
Requirements:
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Proven experience as a C++ developer, preferably in a VMS or similar system.
- Strong understanding of AI model integration.
- Proficiency in C++ programming and object-oriented design.
- Experience with video processing and management systems.
- Familiarity with software development lifecycle and agile methodologies.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork skills.
Preferred Qualifications:
- Experience with machine learning frameworks and libraries.
- Knowledge of other programming languages such as Python or Java.
- Previous experience in a similar role within the tech industry.
- Good English communication skills.
- Experience using Atlassian products (Jira, Confluence).
Recommended Skills:
- GStreamer/OpenCV: For video stream handling, processing, and displaying video in your application.
- ONVIF Device Manager SDK/LibVLC: For integrating ONVIF and RTSP streams.
- SQLite/PostgreSQL: For storing metadata and incident records.
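As a rough illustration of the GStreamer/OpenCV and RTSP handling listed above, here is a minimal Python sketch (the role itself is C++-focused, but the flow is similar); the RTSP URL and frame handling are placeholders, not MarkAny's actual system.

```python
# Minimal sketch: pull frames from an RTSP camera with OpenCV and display them.
# The RTSP URL is a placeholder; a real VMS would obtain it via ONVIF discovery.
import cv2

RTSP_URL = "rtsp://user:pass@192.168.1.10:554/stream1"  # hypothetical camera endpoint

def main() -> None:
    cap = cv2.VideoCapture(RTSP_URL)
    if not cap.isOpened():
        raise RuntimeError("Could not open RTSP stream")
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # stream dropped; a production VMS would reconnect here
            # A frame could be handed to an AI model or overlaid with metadata here.
            cv2.imshow("VMS preview", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```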
About Sun King
Sun King is the world’s leading off-grid solar energy company, providing affordable solar solutions to the 1.8 billion people without reliable access to electricity. By combining product design, fintech, and field operations, Sun King has connected over 20 million homes to solar power across Africa and Asia, adding more than 200,000 new homes each month. Through ‘pay-as-you-go’ financing, customers make small payments to eventually own their solar systems, saving money and reducing reliance on harmful energy sources like kerosene.
Sun King employs 2,800 staff across 12 countries, with expertise in product design, data science, logistics, customer service, and more. The company is expanding its product range to include clean cooking, electric mobility, and entertainment solutions, all while supporting a diverse workforce — with women making up 44% of the team.
About the Role
As a key member of our quality assurance team, you will play a crucial role in ensuring the reliability and performance of our software products. This position offers an excellent opportunity to make a significant impact by developing and implementing automated testing solutions.
In this role, you will be responsible for designing, developing, and executing automated test scripts to ensure the quality of our software applications. You will work closely with our development and product teams to identify and resolve defects, improve testing processes, and drive continuous improvement in our software development lifecycle.
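To make that concrete, here is a minimal, hedged sketch of the kind of automated UI check this role describes, assuming pytest and Selenium WebDriver are available; the URL and expected page content are placeholders, not Sun King's actual application.

```python
# Minimal pytest + Selenium sketch of an automated smoke test.
# BASE_URL and the expectations are placeholders for an application under test.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "https://example.com/login"  # hypothetical application under test

@pytest.fixture
def driver():
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()

def test_login_page_loads(driver):
    driver.get(BASE_URL)
    assert "Login" in driver.title  # placeholder expectation

def test_login_form_has_fields(driver):
    driver.get(BASE_URL)
    assert driver.find_element(By.NAME, "username") is not None
    assert driver.find_element(By.NAME, "password") is not None
```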
What you will be expected to do
- Develop automation scripts and frameworks for web applications, APIs, and other software solutions using industry-standard tools and technologies.
- Collaborate with software developers, product managers, and quality assurance engineers to ensure comprehensive test coverage.
- Execute automated tests and analyze results to ensure software quality and performance.
- Identify, document, and track software defects to resolution.
- Participate in Agile/Scrum ceremonies such as daily stand-ups, sprint planning, and retrospectives.
- Contribute to continuous improvement initiatives related to testing processes and methodologies.
- Provide mentorship and guidance to junior team members on automation best practices.
You might be a strong candidate if you have/are
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 3+ years of hands-on experience in automation testing.
- Proficiency in automation tools such as Selenium WebDriver, Cypress, or similar.
- Strong programming skills in languages like Java, Python, or JavaScript.
- Experience with test management tools and version control systems.
- Solid understanding of Agile/Scrum methodologies and practices.
- ISTQB or similar certification in software testing.
- Experience with CI/CD pipelines and DevOps practices.
- Knowledge of performance testing tools and techniques.
- Good analytical and problem-solving abilities.
- Strong communication and teamwork skills.
What Sun King Offers
- Professional growth in a dynamic, rapidly expanding, high-social-impact industry.
- An open-minded, collaborative culture made up of enthusiastic colleagues who are driven by the challenge of innovation towards a profound impact on people and the planet.
- A truly multicultural experience: you will have the chance to work with and learn from people from different geographies, nationalities, and backgrounds.
- Structured, tailored learning and development programs that help you become a better leader, manager, and professional through the Sun Center for Leadership.
About AdNabu
We are on a mission to help Shopify Merchants grow their e-commerce business. We have 4 apps currently live in the Shopify App Store, with more to follow.
We believe in
- Building a large profitable business: We envision building a capital-efficient, large, profitable business to achieve our mission of democratizing marketing. We are one of the few profitable Indian SaaS startups building Value SaaS.
- Employee Wellness <> Customer Success: We give as much importance to each team member’s personal and professional success as we do to our customers' success. We believe it’s all about balance.
Our impact so far
- 10000+ active stores using our software
- Profitable for more than 24 months
- 10M+ products updated daily
- Achieved with a small team of 20-25
Compensation
- Total Salary: Rs. 10 - 22 lakhs per annum (based on prior experience and skills)
- Equity will be awarded after 12 months, based on the impact created. We prefer that you hold equity in the company.
Hiring Process
We have 4 steps in total. We will hire you if you pass each round. All the steps except the assignment will be virtual. We expect you to have a stable internet connection and turn on the video during the interview.
- Assignment - If you match the job requirements, we will send you an assignment to complete. This should not take you more than an hour to complete.
- Technical Interviews - There will be two sets of technical interviews. Questions in the first round will evaluate your skill set, and experience and will also include coding. The second round will be focused on system designing and project planning skills.
- CEO Interview - This will be technical as well as general questions.
- Culture Fit Round - A member from a non-technical team will conduct this round. This is also a good opportunity to clarify your doubts about us and our culture.
Responsibilities:
Within 1 month:
- Rapidly onboard and gain a comprehensive understanding of our existing product through training sessions.
- Set up the development environment and successfully deploy your initial code to production.
- Conduct introductory calls with all members of the AdNabu team to foster strong team relationships.
Within 3 months:
- Begin development on your first service with guidance and support from the team.
- Write your first set of unit test cases and establish functional testing workflows.
- Conduct code reviews for your peers.
- Actively participate in bug bashes to gain a thorough understanding of new features under development.
Within 6 months:
- Successfully launch two to three services to production.
- Make architectural and infrastructure decisions that have a positive impact on the overall product.
- Demonstrate proficiency in navigating our technology stack and infrastructure.
- Assume responsibility for planning, scoping, designing, and implementing new services.
Within 12 months:
- Launch a minimum of 3 to 4 core services to production and take ownership of scaling initiatives.
- Participate in interviewing and hiring processes, contributing to team growth and cultural development.
- Collaborate with leadership across engineering, product, marketing, and customer success to define priorities and establish delivery goals.
Requirements:
- Solid understanding of Computer Science fundamentals, including object-oriented design, data structures, algorithm design, problem-solving, complexity analysis, databases, networking, and distributed systems.
- 2-6 years of experience in product development, specializing in Python and MVC-based web frameworks.
- Proficient with Linux systems, version control, and CI/CD pipelines.
- Experience in designing scalable architectures for data-intensive applications.
- Strong verbal and written communication skills
- Capable of proposing ideas and solutions, actively seeking and incorporating feedback from the team.
- Previous experience in a product-based company or startup is a bonus.
Personality traits we really admire:
- Great attitude to ask questions, learn, and suggest process improvements.
- Attention to detail.
- Equal importance to planning, coding, code reviews, documentation, and testing.
- Highly motivated and coming up with fresh ideas and perspectives to help us move towards our goals faster.
- Adheres to release cycles and absolute commitment to deadlines.
Why should you join AdNabu?
By joining us as a Senior Software Engineer in a growing team, you have the opportunity to make a huge impact by working closely with the leadership team, including the CEO. As we scale our tech team over the next few months, you will have a key role in hiring and taking on bigger responsibilities.
This is what our team members enjoy the most about AdNabu:
- Freedom & Responsibility: If you are a person who wants to take up challenging work & push your personal boundaries, then this is the right place for you.
- Competitive Salary: As AdNabu continues to grow, you’ll have a real opportunity to create wealth for yourself and your family. We'll ensure you are financially well-off in the end.
- Holistic Growth: Building a career doesn’t have to be at the cost of missing out on your personal front. We believe that professional success is worth it when personal goals are nurtured with equal importance. We will support you on that journey of yours.
- Transparency: If you ever wanted to know what it’s like to be on an entrepreneurial journey, then working with AdNabu gives you that opportunity to experience it all firsthand.
- Food & Snacks: We provide Sodexo coupons monthly. This is on top of your salary :)
- Health Insurance: We offer health insurance coverage for you, your spouse, children and parents.
- Flexible leaves & work-from-home: We only care about effective and timely work. Do it from wherever you want to do it. Your home, or a beach in Goa, is up to you :).
If all of this sounds exciting to you, join us for an exciting and equally fulfilling ride at AdNabu!
About the company
Adia makes clinicians better diagnosticians. Adia Health revolutionizes clinical decision support by enhancing diagnostic accuracy and personalizing care. It modernizes the diagnostic process by automating optimal lab test selection and interpretation, utilizing a combination of expert medical insights, real-world data, and artificial intelligence. This approach not only streamlines the diagnostic journey but also ensures precise, individualized patient care by integrating comprehensive medical histories and collective platform knowledge.
Position Overview
We are seeking a highly skilled Backend Engineer specializing in integrations and platform development to join our dynamic team. The ideal candidate will have a background working in a complex domain, and have a proven track record of success. This role requires a deep understanding of backend technologies, strong problem-solving skills, and the ability to collaborate effectively with cross-functional teams.
Key Responsibilities
- Design, implement, and maintain scalable and secure integrations with third-party systems and APIs to enable seamless data exchange and functionality.
- Develop and maintain internal platform services and APIs to support various product features and business requirements.
- Collaborate with cross-functional teams to ensure smooth integration of backend services with user-facing applications.
- Work closely with product managers and stakeholders to understand integration requirements and translate them into technical solutions.
- Identify opportunities for performance optimization, scalability improvements, and system enhancements within the integration and platform infrastructure.
- Implement monitoring, logging, and alerting solutions to ensure the reliability and availability of integration services and platform components.
- Experience with HL7/FHIR is a huge plus.
Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- Proven experience (4+ years) in backend development with expertise in building integrations and platform services
- Proficiency in Node.js, JavaScript, TypeScript, MongoDB, SQL, NoSQL, AWS (or other cloud providers like GCP or Azure)
- Strong problem-solving skills and the ability to collaborate effectively with cross-functional teams in an agile environment
- Experience working in a complex domain, ideally U.S. Healthcare
- English fluency required
What we are looking for
We’re looking to hire software craftspeople and data engineers. People who are proud of the way they work and the code they write. People who believe in and are evangelists of extreme programming principles. High quality, motivated and passionate people who make great teams. We heavily believe in being a DevOps organization, where developers own the entire release cycle including infrastructure technologies in the cloud.
What you’ll be doing
You’ll be working on the data architecture, ETL pipelines, and a cloud migration strategy. You’ll be writing self-explanatory, clean code. Your code will produce the same, predictable results, over and over again. You’ll be making frequent, small releases. You’ll be working in pairs. You’ll be doing peer code reviews.
You will be responsible for all aspects of development – from understanding requirements, writing stories, analyzing the technical approach to writing test cases where possible, development, deployment and fixes. You will own the entire stack and take complete ownership of the solution. And, most importantly, you’ll be making a pledge that you’ll never stop learning!
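To ground the ETL work described above, here is a minimal, hedged sketch of a single extract-transform-load step in Python using pandas and SQLAlchemy; the source file, connection string, column names, and table name are assumptions, not the actual pipeline.

```python
# Minimal ETL sketch: read a CSV extract, clean it, and load it into a SQL table.
# The file path, connection string, and table/column names are placeholders.
import pandas as pd
from sqlalchemy import create_engine

SOURCE_CSV = "landing/orders.csv"                             # hypothetical extract
TARGET_DSN = "postgresql+psycopg2://user:pass@db/warehouse"   # hypothetical target
TARGET_TABLE = "stg_orders"

def run_pipeline() -> int:
    # Extract
    df = pd.read_csv(SOURCE_CSV, parse_dates=["order_date"])
    # Transform: deduplicate, normalise column names, derive a metric
    df = df.drop_duplicates(subset=["order_id"])
    df.columns = [c.strip().lower() for c in df.columns]
    df["net_amount"] = df["gross_amount"] - df["discount"]
    # Load
    engine = create_engine(TARGET_DSN)
    df.to_sql(TARGET_TABLE, engine, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    print(f"Loaded {run_pipeline()} rows")
```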
Skills you need in order to succeed in this role
Most Important: Integrity of character, diligence and the commitment to do your best
- Technologies:
- Azure Data Factory
- Python/Java
- SSIS/Apache NiFi (Good to have)
- Experience with:
- Data warehousing and data lake initiatives on the Azure cloud
- Database concepts and optimization of complex queries
- Creating data pipelines
We are seeking a Cloud Architect for a Geocode Service Center Modernization Assessment and Implementation project. The primary objective of the project is to migrate the legacy Geocode Service Center to a cloud-based solution. Initial work will focus on leading the assessment and design phases, followed by implementation of the approved design.
Responsibilities:
- System Design and Architecture: Design and develop scalable, cloud-based geocoding systems that meet business requirements.
- Integration: Integrate geocoding services with existing cloud infrastructure and applications.
- Performance Optimization: Optimize system performance, ensuring high availability, reliability, and efficiency.
- Security: Implement robust security measures to protect geospatial data and ensure compliance with industry standards.
- Collaboration: Work closely with data scientists, developers, and other stakeholders to understand requirements and deliver solutions.
- Innovation: Stay updated with the latest trends and technologies in cloud computing and geospatial analysis to drive innovation.
- Documentation: Create and maintain comprehensive documentation for system architecture, processes, and configurations.
Requirements:
- Educational Background: Bachelor’s or Master’s degree in Computer Science, Information Technology, Geography, or a related field.
- Technical Proficiency: Extensive experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and geocoding tools like Precisely, ESRI etc.
- Programming Skills: Proficiency in programming languages such as Python, Java, or C#.
- Analytical Skills: Strong analytical and problem-solving skills to design efficient geocoding systems.
- Experience: Proven experience in designing and implementing cloud-based solutions, preferably with a focus on geospatial data.
- Communication Skills: Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Certifications: Relevant certifications in cloud computing (e.g., AWS Certified Solutions Architect) and geospatial technologies are a plus.
Benefits:
- Work Location: Remote
- 5 days working
You can apply directly through the link: https://zrec.in/il0hc?source=CareerSite
Explore our Career Page for more such jobs: careers.infraveo.com
Key Responsibilities
• Lead the automation testing effort of our cloud management platform.
• Create and maintain automation test cases and test suites.
• Work closely with the development team to ensure that the automation tests are integrated into the development process.
• Collaborate with other QA team members to identify and resolve defects.
• Implement automation testing best practices and continuously improve the automation testing framework.
• Develop and maintain automation test scripts using programming languages such as Python.
• Conduct performance testing using tools such as JMeter, Gatling, or Locust.
• Monitor and report on automation testing and performance testing progress and results.
• Ensure that the automation testing and performance testing strategy aligns with overall product quality goals and objectives.
• Manage and mentor a team of automation QA engineers.
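Since Locust (one of the performance-testing tools named above) is itself Python-based, here is a minimal, hedged sketch of the kind of performance-test script this role involves; the host and endpoints are placeholders.

```python
# Minimal Locust sketch: simulated users hitting two placeholder endpoints.
# Run with: locust -f locustfile.py --host https://staging.example.com
from locust import HttpUser, task, between

class PlatformUser(HttpUser):
    wait_time = between(1, 3)  # seconds between simulated user actions

    @task(3)
    def list_resources(self):
        self.client.get("/api/v1/resources")  # hypothetical read-heavy endpoint

    @task(1)
    def create_resource(self):
        self.client.post("/api/v1/resources", json={"name": "perf-test"})
```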
Requirements
• Bachelor's degree in Computer Science or a related field.
• 8+ years of experience in automation testing and performance testing.
• Experience in leading and managing automation testing teams.
• Strong experience with automation testing frameworks including Robot Framework.
• Strong experience with programming languages, including Python.
• Strong understanding of software development lifecycle and agile methodologies.
• Experience with testing cloud-based applications.
• Good understanding of Cloud services & ecosystem, specifically AWS.
• Experience with performance testing tools such as JMeter, Gatling, or Locust.
• Excellent analytical and problem-solving skills.
• Excellent written and verbal communication skills.
• Ability to work independently and in a team environment.
• Passionate about automation testing and performance testing.
We are seeking a Junior Software Engineer (AWS, Azure, Google Cloud, Spring, Node.js, Django) to join our dynamic team. As a Junior Software Engineer, you will have a passion for technology, a solid understanding of software development principles, and a desire to learn and grow in a collaborative environment. You will work closely with senior engineers to develop, test, and maintain software solutions that meet the needs of our clients and internal stakeholders.
Responsibilities:
- Software Development: Write clean, efficient, and well-documented code for various software applications.
- Testing & Debugging: Assist in testing and debugging software to ensure functionality, performance, and security.
- Learning & Development: Continuously improve your technical skills by learning new programming languages, tools, and AI methodologies.
- Documentation: Assist in the documentation of software designs, technical specifications, and user manuals.
- Problem-Solving: Identify and troubleshoot software defects and performance issues.
- Customer Communication: Interact with customers to gather requirements, provide technical support, and ensure their needs are met throughout the software development lifecycle. Maintain a professional and customer-focused attitude in all communications.
Requirements:
- Education: Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Programming Languages: Proficiency in at least one programming language such as Java, Python, TypeScript or JavaScript.
- Familiarity with: Git version control system, Scrum software development methodology, and basic understanding of databases and SQL.
- Problem-Solving Skills: Strong analytical and problem-solving skills with a keen attention to detail.
- Communication: Good verbal and written communication skills with the ability to work effectively in a team environment and interact with customers.
- Adaptability: Ability to learn new technologies and adapt to changing project requirements.
- Internship/Project Experience: Previous internship experience or project work related to software development is a plus.
Preferred Skills:
- Experience with cloud platforms (e.g., AWS, Azure, Google Cloud).
- Familiarity with back-end frameworks (e.g., Spring, Node.js, Django).
- Knowledge of DevOps practices and tools.
Benefits:
- Work Location: Remote
- 5 days working
You can apply directly through the link: https://zrec.in/F57mD?source=CareerSite
Explore our Career Page for more such jobs: careers.infraveo.com
We are seeking a Data Engineer (Snowflake, BigQuery, Redshift) to join our team. In this role, you will be responsible for the development and maintenance of fault-tolerant pipelines spanning multiple database systems.
Responsibilities:
- Collaborate with engineering teams to create REST API-based pipelines for large-scale MarTech systems, optimizing for performance and reliability.
- Develop comprehensive data quality testing procedures to ensure the integrity and accuracy of data across all pipelines.
- Build scalable dbt models and configuration files, leveraging best practices for efficient data transformation and analysis.
- Partner with lead data engineers in designing scalable data models.
- Conduct thorough debugging and root cause analysis for complex data pipeline issues, implementing effective solutions and optimizations.
- Adhere to the group's standards such as SLAs, code styles, and deployment processes.
- Anticipate breaking changes and implement backwards-compatibility strategies for API schema changes.
- Assist the team in monitoring pipeline health via observability tools and metrics.
- Participate in refactoring efforts as platform application needs evolve over time.
Requirements:
- Bachelor's degree or higher in Computer Science, Engineering, Mathematics, or a related field.
- 3+ years of professional experience with a cloud database such as Snowflake, BigQuery, or Redshift.
- 1+ years of professional experience with dbt (Cloud or Core).
- Exposure to various data processing technologies such as OLAP and OLTP and their applications in real-world scenarios.
- Experience working cross-functionally with other teams such as Product, Customer Success, and Platform Engineering.
- Familiarity with orchestration tools such as Dagster/Airflow.
- Familiarity with ETL/ELT tools such as dltHub/Meltano/Airbyte/Fivetran and dbt.
- High intermediate to advanced SQL skills (comfort with CTEs, window functions).
- Proficiency with Python and related libraries (e.g., pandas, sqlalchemy, psycopg2) for data manipulation, analysis, and automation.
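As a rough sketch of the REST-API-based pipeline work described in the responsibilities above, the snippet below pulls paginated records from a hypothetical MarTech API and stages them with pandas; the endpoint, auth token, and pagination scheme are assumptions.

```python
# Sketch: pull paginated records from a hypothetical REST API and stage them as a DataFrame.
# The endpoint, token, and pagination parameters are placeholders.
import requests
import pandas as pd

API_URL = "https://api.example-martech.com/v1/contacts"  # hypothetical endpoint
TOKEN = "supplied-via-secrets-manager"                   # never hard-code real credentials

def fetch_all(page_size: int = 500) -> pd.DataFrame:
    headers = {"Authorization": f"Bearer {TOKEN}"}
    rows, page = [], 1
    while True:
        resp = requests.get(
            API_URL,
            headers=headers,
            params={"page": page, "per_page": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("data", [])
        if not batch:
            break
        rows.extend(batch)
        page += 1
    return pd.DataFrame(rows)

if __name__ == "__main__":
    df = fetch_all()
    print(f"Fetched {len(df)} contacts")
```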
Benefits:
- Work Location: Remote
- 5 days working
You can apply directly through the link: https://zrec.in/e9578?source=CareerSite
Explore our Career Page for more such jobs: careers.infraveo.com
Mid Size Product Engineering Services Company
This role will report to the Chief Technology Officer
You Will Be Responsible For
* Driving decision-making on enterprise architecture and component-level software design to ensure the timely build and delivery of our software platforms.
* Leading a team in building a high-performing and scalable SaaS product.
* Conducting code reviews to maintain code quality and follow best practices
* Developing DevOps practices that promote automation, including asset creation, enterprise strategy definition, and team training
* Developing and building microservices leveraging cloud services
* Working on application security aspects
* Driving innovation within the engineering team, translating product roadmaps into clear development priorities, architectures, and timely release plans to drive business growth.
* Creating a culture of innovation that enables the continued growth of individuals and the company
* Working closely with Product and Business teams to build winning solutions
* Leading talent management, including hiring, developing, and retaining a world-class team
Ideal Profile
* You possess a Degree in Engineering or a related field and have 20+ years of experience as a Software Engineer, including 10+ years of experience leading teams and at least 4 years of experience building a SaaS / Fintech platform.
* Proficiency in MERN / Java / Full Stack.
* You have led a team in optimizing the performance and scalability of a product.
* You have extensive experience with DevOps environments and CI/CD practices and can train teams.
* You're a hands-on leader, visionary, and problem solver with a passion for excellence.
* You can work in fast-paced environments and communicate asynchronously with geographically distributed teams.
What's on Offer?
* Exciting opportunity to drive the Engineering efforts of a reputed organisation
* Work alongside & learn from best in class talent
* Competitive compensation + ESOPs
Alternative Path is looking for an application developer to assist one of its clients, a SaaS platform that helps alternative investment firms streamline their document collection and data extraction processes using Machine Learning. You will work with individuals in various departments of the company to define and craft new products and features for our platform, and to improve existing ones. You will have a large degree of independence and trust, but you won't be isolated: the support of the Engineering team leads, the Product team leads, and every other technology team member is behind you.
You will bring your projects from initial conception through all the cycles of development from project definition to development, debugging, initial release and subsequent iteration. You will also take part in shaping the architecture of the product, including our deployment infrastructure, to fit the growing needs of the platform.
Key Responsibilities
- Develop front and back-end-related product features for optimal user experience
- Design intuitive user interactions on web pages
- Spin up servers and databases while ensuring stability and scalability of applications
- Work alongside graphic designers to enhance web design features
- Oversee and drive projects from conception to finished product
- Design and develop APIs
- Brainstorm, execute and deliver solutions that meet both technical and consumer needs
- Stay abreast of developments in web applications and programming languages
Desired Skills
- 5-7 years of web application development experience
- Python development and architecture
- Prior experience working with the Django or Flask framework
- Knowledge of React, JavaScript, HTML & CSS
- Familiar with agile development environment, continuous integration and continuous deployment
- Familiar with OOP, MVC, and commonly used design patterns
- Knowledge of SQL and relational databases
- Experience working with one or more AWS services (e.g., EC2, S3, Managed Redis, Elastic Search, Managed Airflow, RDS) is preferred but not mandatory
- Comfortable with continuous integration, automated testing, source control, and other DevOps methodologies
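For illustration only, here is a minimal Flask API sketch of the kind of backend endpoint work named in the skills above; the routes and in-memory data are placeholders, not the client's actual platform.

```python
# Minimal Flask sketch: a JSON API endpoint with basic validation.
# Route names and fields are placeholders.
from flask import Flask, jsonify, request

app = Flask(__name__)

DOCUMENTS = []  # in-memory stand-in for a real database

@app.route("/api/documents", methods=["GET"])
def list_documents():
    return jsonify(DOCUMENTS)

@app.route("/api/documents", methods=["POST"])
def create_document():
    payload = request.get_json(silent=True) or {}
    if "name" not in payload:
        return jsonify({"error": "name is required"}), 400
    doc = {"id": len(DOCUMENTS) + 1, "name": payload["name"]}
    DOCUMENTS.append(doc)
    return jsonify(doc), 201

if __name__ == "__main__":
    app.run(debug=True)
```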
Gevme is a leading, fast-growing event management platform based in Singapore. It is used by event professionals worldwide to build, operate, and monetise events for some of the biggest brands. The flexibility of the platform provides them with limitless possibilities to turn any event idea into reality. We have already powered hundreds of thousands of events around the world for clients like Facebook, Netflix, Starbucks, Forbes, MasterCard, and the Singapore Government.
We are a product company with a strong engineering and family culture; we are always looking for new ways to enhance the event experience and empower efficient event management. We’re on a mission to groom the next generation of event technology thought leaders as we grow.
Join us if you want to become a part of a vibrant and fast-moving product company that's on a mission to connect people around the world through events.
Job Summary:
Responsibilities:
- Develop and maintain automated test scripts using industry-standard automation tools/frameworks (e.g., Selenium WebDriver, Cypress, TestNG, JUnit, etc.).
- Collaborate with cross-functional teams to define test requirements, acceptance criteria, and test scenarios.
- Identify and prioritize test cases for automation based on risk, impact, and frequency of use.
- Execute automated test suites and analyze test results to identify defects, performance issues, and areas for improvement.
- Work closely with developers to troubleshoot and resolve software defects in a timely manner.
- Participate in code reviews, sprint planning, and release activities to ensure product quality and stability.
- Continuously research and evaluate emerging QA automation tools, technologies, and best practices to enhance our testing processes.
- Contribute to the development and maintenance of test automation frameworks, libraries, and utilities.
- Mentor and provide technical guidance to junior members of the QA team.
- Communicate test status, progress, and issues effectively to project stakeholders.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or related field.
- Proven experience (at least 2 years) as a QA Automation Engineer or similar role, with a focus on web application testing.
- Strong proficiency in at least one programming language (e.g., Java, Python, JavaScript) for test automation.
- Hands-on experience with automation frameworks/tools such as Selenium WebDriver, Cypress, TestNG, JUnit, etc.
- Solid understanding of software testing methodologies, QA best practices, and agile development processes.
- Experience with version control systems (e.g., Git) and continuous integration/continuous deployment (CI/CD) pipelines.
- Excellent analytical, problem-solving, and debugging skills.
- Strong attention to detail and a passion for delivering high-quality software products.
- Excellent written and verbal communication skills.
- Ability to work effectively both independently and as part of a collaborative team environment.
Role
You will develop and maintain the key backend code and infrastructure of the company stack. You will implement AI solutions such as LLMs for tasks like voice-based interactive systems, chatbots, and AI web apps. You should be able to see projects through from start to finish, with good organizational skills and attention to detail. This is a perfect role for someone who likes to build state-of-the-art AI products and work with cutting-edge AI technologies like GPT, LLAMA, etc.
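As a hedged sketch of the kind of AI web app described above, here is a minimal FastAPI chat endpoint; `generate_reply` is a hypothetical stand-in for whatever LLM client (GPT, Llama, etc.) the actual stack uses.

```python
# Minimal FastAPI sketch of a chatbot endpoint.
# generate_reply() is a placeholder for a real LLM call with prompts, history, and safety checks.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

class ChatResponse(BaseModel):
    reply: str

def generate_reply(message: str) -> str:
    # Hypothetical stub: a production system would call an LLM here.
    return f"You said: {message}"

@app.post("/chat", response_model=ChatResponse)
def chat(req: ChatRequest) -> ChatResponse:
    return ChatResponse(reply=generate_reply(req.message))
```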
Qualifications
- BS or MS in Computer Science or relevant field.
- 4+ years experience in backend software development
- Be able to design high-throughput scalable backend systems
- Eagerness to learn applied AI technologies like LLMs, prompt engineering, etc
- Proficiency in Python.
- Experience with cloud computing platforms (AWS, GCP) and technologies like Docker
- Knowledge of REST APIs and databases (MySQL, MongoDB, vector DBs)
We are seeking an Application Developer/Software Engineer with strong technical experience in all phases of the software development life cycle (SDLC) with a demonstrated technical expertise in one or more areas of state-of-the-art software development technology.
Responsibilities:
- Performs activities related to full life-cycle enterprise software development projects.
- Develops detailed functional and technical requirements for client-server and web software applications.
- Conduct detailed analyses and module-level specification development of software requirements.
- Define and implement high-performance and highly scalable product/application architectures and lead operational, tactical, and strategic integration activities.
- Perform complex programming and analysis for web and mobile applications and ETL processing; define requirements; write program specifications; design, code, test, and debug programming assignments; document programs.
- Supervise the efforts of other developers in major system development projects; determine and analyze functional requirements; determine the information processing requirements of proposed solutions; and optimize system performance.
- Work may include fully custom development, customization of COTS products as needed, report development, data conversion, and support of legacy applications.
Requirements:
- Can code at an intermediate or expert level in technologies such as C#, ASP.NET, .NET Core, SQL, Python, Java, React, TypeScript, CSS/JavaScript, Git, Azure, Knockout, MarkLogic, ORACLE, etc.
- 4+ years’ experience or specific educational background sufficient to demonstrate competency with Microsoft technology, including ASP.
- Experience with Artificial Intelligence (AI)/Machine Learning (ML), SharePoint.
- Knowledge of HTML, XHTML, XML, XSLT, .NET Framework, Visual Studio, JavaScript.
- 4+ years with Cloud technologies such as Azure / AWS / Google Cloud.
- Proficient with appropriate programming languages, particularly ASP.NET, and modern web frameworks like React.
- Comfortable with Object Oriented Programming and Software Patterns.
- Excellent interpersonal skills.
- High motivation and ability to work with teams to meet project objectives.
- Ability to work on multiple projects simultaneously.
- Ability to meet project deadlines and goals without management supervision.
- Awareness of database design concepts and proficiency in a general cloud environment.
Educational Requirements:
- BS in a field related to computer science or information systems, or advanced degree, or additional specific training and/or certification in 4th generation computing language.
- Must be able to define and implement high-performance and highly scalable product/application architectures, and able to lead integration activities for operational, tactical, and strategic systems.
- Able to develop detailed functional and technical requirements for client-server and web software applications and conduct detailed analyses and module-level specification development of software requirements.
Benefits:
- Work Location: Remote
- 5 days working
You can apply directly through the link: https://zrec.in/RlUkC?source=CareerSite
Explore our Career Page for more such jobs: careers.infraveo.com
We are seeking a talented Senior DevOps Engineer to join our team. The ideal candidate will play a crucial role in enhancing our infrastructure and ensuring seamless deployment processes.
Responsibilities:
- Design, implement, and maintain automation scripts for build, deployment, and configuration management.
- Collaborate with software developers to optimize application performance.
- Manage cloud infrastructure and service.
- Implement best practices for security and compliance.
- Proficiency in NoSQL databases.
- Knowledge of server management and shell scripting.
- Experience in cloud computing environments.
- Familiarity with .NET framework is a plus.
- Migrate existing cloud infrastructure to Infrastructure as Code (IaC) via tools like Terraform.
- Build out and enhance scalable containerized infrastructure using Kubernetes (k8s) and EKS.
- Assist software engineers with migration of applications to leverage configuration management and configuration as code (CaC) using tools like Docker.
- Configure and optimize CI/CD pipelines to minimize lead time for changes, including pipelines for infrastructure changes.
- Ensure applications and infrastructure are properly instrumented via observability tooling to improve alerting, monitoring, and incident response.
- Recommend infrastructure improvements to ensure architectures are centred around customer needs, and improve overall cloud architecture goals around availability, scalability, reliability, security and costs, and metrics like MTTR, MTBF, etc.
- Automate existing manual workflows and implement controls around them in line with the company's security and compliance goals.
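As one small, hedged example of the "automate existing manual workflows" work above, here is a Python/boto3 sketch that flags EC2 instances missing a required tag; the tag key and region are assumptions.

```python
# Sketch: flag EC2 instances missing a required tag, using boto3.
# The required tag key ("owner") and the region are assumptions.
import boto3

REQUIRED_TAG = "owner"

def find_untagged_instances(region: str = "ap-south-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    untagged = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    untagged.append(instance["InstanceId"])
    return untagged

if __name__ == "__main__":
    for instance_id in find_untagged_instances():
        print(f"Missing '{REQUIRED_TAG}' tag: {instance_id}")
```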
Requirements:
- 10+ years of experience in DevOps or SRE fields, with current experience supporting engineering teams implementing scalable cloud architectures using CaC, IaC, and Kubernetes.
- Strong proficiency with AWS (certification required), including expert-level knowledge of cloud networking and security best practices.
- Strong programming and/or scripting experience in languages like Python and bash, including extensive experience with source code management and deployment pipelines.
- Extensive experience leveraging observability tooling for logging, monitoring, alerting, incident management, and escalation.
- Exceptional debugging and troubleshooting ability.
- Familiarity with web application architecture concepts, such as databases, message queues, and serverless computing.
- Ability to work with Engineering teams to identify and resolve performance constraints.
- Experience managing cloud infrastructure for SaaS applications.
- Experience leading a major cloud migration (on-premise to cloud, poly-cloud, etc.) preferred.
- Experience enabling a continuous deployment capability in a previous role preferred.
- Demonstrated experience guiding and leading other DevOps, cloud, and software engineers, leveling up technical proficiency and overall cloud capabilities.
Benefits:
- Work Location: Remote
- 5 days working
You can apply directly through the link: https://zrec.in/U2vjo?source=CareerSite
Explore our Career Page for more such jobs: careers.infraveo.com
Who We Are:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have 1000+ employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website - https://www.gohighlevel.com/
YouTube Channel- https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies and entrepreneurs and 450K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
We work at scale; our infrastructure handles around 3 Billion+ API hits & 2 Billion+ message events monthly and over 25M views of customer pages daily. We also handle over 80 Terabytes of data across 5 Databases.
About the role
We are looking for a senior AI engineer for the platform team. The ideal candidate will be responsible for deriving key insights from huge sets of data, building models around those, and taking them to production.
● Implementation
- Analyze data to gain new insights, develop custom data models and algorithms to apply to distributed data sets at scale
- Build continuous integration, test-driven development, and production deployment frameworks
● Architecture - Design the architecture with the Data and DevOps engineers
● Ownership - Take ownership of the accuracy of the models and coordinate with the stakeholders to keep moving forward
● Releases - Take ownership of the releases and pre-plan according to the needs
● Quality - Maintain high standards of code quality through regular design and code reviews.
Qualifications
- Extensive hands-on experience in Python/R is a must.
- 3 years of AI engineering experience
- Proficiency in ML/DL frameworks and tools (e.g., pandas, NumPy, scikit-learn, PyTorch, Lightning, Hugging Face)
- Strong command in low-level operations involved in building architectures for Ensemble Models, NLP, CV (eg. XGB, Transformers, CNNs, Diffusion Models)
- Experience with end-to-end system design; data analysis, distributed training, model optimisation, and evaluation systems, for large-scale training & prediction.
- Experience working with huge datasets (ideally in TeraBytes) would be a plus.
- Experience with frameworks like Apache Spark (using MLlib or PySpark), Dask etc.
- Practical experience in API development (e.g., Flask, FastAPI).
- Experience with MLOps principles (scalable development & deployment of complex data science workflows) and associated tools, e.g., MLflow, Kubeflow, ONNX.
- Bachelor's degree or equivalent experience in Engineering or a related field of study
- Strong people, communication, and problem-solving skills
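As a minimal, hedged illustration of the model-building workflow described in the qualifications above (not HighLevel's actual stack or data), here is a scikit-learn train-and-evaluate sketch on synthetic data.

```python
# Sketch: train and evaluate a simple classifier with scikit-learn on synthetic data.
# Real work at platform scale would add distributed training, experiment tracking, and deployment.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print(f"Test ROC-AUC: {roc_auc_score(y_test, probs):.3f}")
```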
What to Expect when you Apply
● Exploratory Call
● Technical Round I/II
● Assignment
● Cultural Fitment Round
EEO Statement:
At HighLevel, we value diversity. In fact, we understand it makes our organization stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
We are looking for an experienced Senior API Developer with expertise in Python to join our talented development team. The candidate should have over 8 years of hands-on experience in designing and developing APIs, with a strong emphasis on performance and scalability. Proficiency with Swagger/OpenAPI specifications is required. This role is pivotal to our mission of delivering top-tier solutions that power our core business functions.
Key Responsibilities:
- API Development: Design, develop, and maintain high-quality APIs using Python frameworks such as Django Rest Framework or FastAPI.
- Documentation with Swagger/OpenAPI: Employ Swagger/OpenAPI tools to produce clear and thorough API documentation.
- Integration: Implement and manage integrations between APIs and various internal and external services.
- Optimize Performance: Monitor and enhance the performance of APIs to ensure fast response times and low latency.
- Team Collaboration: Work closely with other developers, product owners, and stakeholders to understand requirements and deliver optimal solutions.
- Code Quality: Uphold best practices in code quality, testing, and deployment. Participate in peer code reviews.
- Debugging and Support: Troubleshoot and resolve issues in development, test, and production environments.
- Continuous Learning: Stay informed about emerging technologies and methodologies in Python and API development.
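As a brief, hedged illustration of the Swagger/OpenAPI documentation responsibility above: FastAPI (one of the frameworks named) generates an OpenAPI schema automatically from typed endpoints, as in this minimal sketch with placeholder models.

```python
# Minimal FastAPI sketch: typed request/response models drive the generated
# OpenAPI/Swagger docs served at /docs and /openapi.json. Models are placeholders.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Orders API", version="1.0.0")

class Order(BaseModel):
    id: int
    item: str
    quantity: int

ORDERS: dict[int, Order] = {}

@app.post("/orders", response_model=Order, summary="Create an order")
def create_order(order: Order) -> Order:
    ORDERS[order.id] = order
    return order

@app.get("/orders/{order_id}", response_model=Order, summary="Fetch an order")
def get_order(order_id: int) -> Order:
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="Order not found")
    return ORDERS[order_id]
```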
Qualifications:
- Education: Bachelor's degree in Computer Science, Information Technology, or a related discipline.
- Experience: At least 8 years of professional experience in API development with a focus on Python.
Technical Skills:
- Proficiency in Python programming language.
- Strong experience with API design, development, and RESTful services.
- Expertise with Swagger/OpenAPI specifications.
- Familiarity with Python frameworks like Django, Flask, or FastAPI.
- Experience with ORMs such as SQLAlchemy or Django ORM.
- Knowledge of version control systems, particularly Git.
- Experience working with relational databases like PostgreSQL or MySQL.
Soft Skills:
- Excellent problem-solving and analytical skills.
- Strong verbal and written communication.
- Ability to collaborate effectively within a team and across departments.
Preferred Qualifications:
- Experience with cloud platforms like AWS, GCP, or Azure.
- Knowledge of microservices architecture and distributed systems.
- Familiarity with CI/CD pipelines and tools like Jenkins or GitLab CI.
- Understanding of asynchronous programming and event-driven architecture.
What We Offer:
- Competitive Compensation: Reflective of your experience and the value you bring.
- Growth Opportunities: Support for professional development, training, and certifications.
- Flexible Work Environment: Remote work options and flexible hours to promote work-life balance.
Job Title: Full Stack Developer
Location: Pune
Job Type: Full-Time
Company: Unity Wealth
About Unity Wealth: Hey Pune! We're an early-stage Gen AI startup set to revolutionise productivity in financial services. Our mission is to empower financial advisers to scale and optimize their services through innovative technology solutions.
What You’ll Do:
- 👨💻 Create web and mobile apps that will make your friends jealous.
- 🧠 Work with our genius AI and product engineering team to integrate the latest tech.
- 🚀 Have fun while making a real impact.
Responsibilities:
- Design, develop, and maintain scalable and high-performance full-stack modules from frontend to backend.
- Architect and implement cloud-based solutions (AWS, Azure, Google Cloud) to support the scalability, availability, and security requirements of our applications.
- Collaborate with product managers, designers, and other stakeholders to understand requirements and translate them into technical specifications.
- Design and develop RESTful APIs and integrate with microservices architecture.
- Ensure the technical feasibility of UI/UX designs and maintain graphic standards and branding throughout the product's interface.
- Maintain code integrity and organization, including code version control.
- Write clean, efficient, and well-documented code following industry best practices and coding standards.
- Perform code reviews, identify areas for improvement, and suggest solutions to enhance application performance and usability.
- Implement and maintain CI/CD pipelines to automate deployments.
Requirements:
- Minimum 4 years of experience in full-stack development with a focus on backend technologies.
- B.E./B.Tech/M.Sc. in Computer Science/Engineering.
- Strong programming skills in HTML, CSS, JavaScript, Node.js, and Python.
- Experience designing and building microservices-based architectures.
- Experience in building highly efficient and secure RESTful APIs.
- Proficiency with any flavour of Linux such as Ubuntu, CentOS, RedHat, or SuSE.
- Knowledge of software development best practices, design patterns, and principles.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
- Prior startup experience is a plus.
- An all-around great human with strong moral principles and a team player attitude 🥇
Why Us?
- Competitive salary and benefits (we know you deserve it) 💰
- Flexible remote work options (work in your PJs, we don’t mind) 🏠
- Be part of something revolutionary and have fun doing it 🔥
Thirumoolar software is seeking talented AI researchers to join our cutting-edge team and help drive innovation in artificial intelligence. As an AI researcher, you will be at the forefront of developing intelligent systems that can solve complex problems and uncover valuable insights from data.
Responsibilities:
Research and Development: Conduct research in AI areas relevant to the company's goals, such as machine learning, natural language processing, computer vision, or recommendation systems. Explore new algorithms and methodologies to solve complex problems.
Algorithm Design and Implementation: Design and implement AI algorithms and models, considering factors such as performance, scalability, and computational efficiency. Use programming languages like Python, Java, or C++ to develop prototype solutions.
Data Analysis: Analyze large datasets to extract meaningful insights and patterns. Preprocess data and engineer features to prepare it for training AI models. Apply statistical methods and machine learning techniques to derive actionable insights.
Experimentation and Evaluation: Design experiments to evaluate the performance of AI algorithms and models. Conduct thorough evaluations and analyze results to identify strengths, weaknesses, and areas for improvement. Iterate on algorithms based on empirical findings.
Collaboration and Communication: Collaborate with cross-functional teams, including software engineers, data scientists, and product managers, to integrate AI solutions into our products and services. Communicate research findings, technical concepts, and project updates effectively to stakeholders.
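To ground the experimentation-and-evaluation work described above, here is a small, hedged sketch that compares two candidate models with cross-validation; the synthetic data stands in for a real research dataset.

```python
# Sketch: compare two candidate models with k-fold cross-validation.
# Synthetic data is used as a placeholder for a real dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f} (+/- {scores.std():.3f})")
```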
Preferred Location: Chennai
About the Role
We are actively seeking talented Senior Python Developers to join our ambitious team dedicated to pushing the frontiers of AI technology. This opportunity is tailored for professionals who thrive on developing innovative solutions and who aspire to be at the forefront of AI advancements. You will work with different companies in the US that are looking to develop both commercial and research AI solutions.
Required Skills:
- Write effective Python code to tackle complex issues
- Use business sense and analytical abilities to glean valuable insights from public databases
- Clearly express the reasoning and logic when writing code in Jupyter notebooks or other suitable mediums
- Extensive experience working with Python
- Proficiency with the language's syntax and conventions
- Previous experience tackling algorithmic problems
- Nice to have some prior Software Quality Assurance and Test Planning experience
- Excellent spoken and written English communication skills
The ideal candidates should be able to
- Clearly explain their strategies for problem-solving.
- Design practical solutions in code.
- Develop test cases to validate their solutions.
- Debug and refine their solutions for improvement.
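For example, the problem-solving-plus-test-cases loop described above might look like this minimal sketch (an arbitrary algorithmic task chosen only for illustration, not one of the client's actual problems).

```python
# Sketch: a small algorithmic solution with test cases validating it.
# The problem (two-sum) is arbitrary, chosen only to illustrate the workflow.
def two_sum(nums: list[int], target: int) -> tuple[int, int] | None:
    """Return indices of two numbers that add up to target, or None."""
    seen: dict[int, int] = {}
    for i, value in enumerate(nums):
        complement = target - value
        if complement in seen:
            return seen[complement], i
        seen[value] = i
    return None

def test_two_sum() -> None:
    assert two_sum([2, 7, 11, 15], 9) == (0, 1)
    assert two_sum([3, 3], 6) == (0, 1)
    assert two_sum([1, 2, 3], 100) is None

if __name__ == "__main__":
    test_two_sum()
    print("All test cases passed")
```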
About Davis Index
Davis Index is a market intelligence platform and publication that provides price benchmarks for recycled materials and primary metals.
Our team of dedicated reporters, analysts, and data specialists publish and process over 1,400 proprietary price indexes, metals futures prices, and other reference data including market intelligence, news, and analysis through an industry-leading technology platform.
About the role
Here at Davis Index, we look to bring true, accurate market insights, news and data to the recycling industry. This enables sellers and buyers to boost their margins, and access daily market intelligence, data analytics, and news.
We’re looking for a keen data expert who will take on a high-impact role that focuses on end-to-end data management, BI and analysis tasks within a specific functional area or data type. If taking on challenges in building, extracting, refining and very importantly automating data processes is something you enjoy doing, apply to us now!
Key Role
Data visualization: Power BI, Tableau, Python
DB management: SQL, MongoDB
Data collection, cleaning, modelling, and analysis
Programming languages and tools: Python, R, VBA, Apps Script, Excel, Google Sheets
What you will do in this role
- Build and maintain data pipelines from internal databases.
- Data mapping of data elements between source and target systems.
- Create data documentation including mappings and quality thresholds.
- Build and maintain analytical SQL/MongoDB queries, scripts.
- Build and maintain Python scripts for data analysis/cleaning/structuring.
- Build and maintain visualizations; delivering voluminous information in comprehensible forms or in ways that make it simple to recognise patterns, trends, and correlations.
- Identify and develop data quality initiatives and opportunities for automation.
- Investigate, track, and report data issues.
- Utilize various data workflow management and analysis tools.
- Ability and desire to learn new processes, tools, and technologies.
- Understanding fundamental AI and ML concepts.
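As a hedged sketch of the MongoDB-to-pandas cleaning work listed above (the connection string, collection, and fields are placeholders, not Davis Index's actual data):

```python
# Sketch: pull raw price records from MongoDB, clean them with pandas,
# and write a tidy extract. Connection string, collection, and fields are placeholders.
import pandas as pd
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical connection
collection = client["market_data"]["raw_prices"]

df = pd.DataFrame(list(collection.find({}, {"_id": 0})))

# Basic cleaning: normalise column names, drop duplicates, coerce types
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
df = df.drop_duplicates()
df["price"] = pd.to_numeric(df["price"], errors="coerce")
df["date"] = pd.to_datetime(df["date"], errors="coerce")
df = df.dropna(subset=["price", "date"])

df.to_csv("clean_prices.csv", index=False)
print(f"Wrote {len(df)} cleaned rows")
```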
Must have experience and qualifications
- Bachelor's degree in Computer Science, Engineering, or Data related field required.
- 2+ years’ experience in data management.
- Advanced proficiency with Microsoft Excel and VBA / Google Sheets and Apps Script.
- Proficiency with MongoDB/SQL.
- Familiarity with Python for data manipulation and process automation preferred.
- Proficiency with various data types and formats including, but not limited to JSON.
- Intermediate proficiency with HTML/CSS.
- Data-driven strategic planning
- Strong background in data analysis, data reporting, and data management, coupled with adept process mapping and improvement skills.
- Strong research skills.
- Attention to detail.
What you can expect
Work closely with a global team helping bring market intelligence to the recycling world. As a part of the Davis Index team we look to foster relationships and help you grow with us. You can also expect:
- Work with leading minds from the recycling industry and be part of a growing, energetic global team
- Exposure to developments and tools within your field ensures evolution in your career and skill building along with competitive compensation.
- Health insurance coverage, paid vacation days and flexible work hours helping you maintain a work-life balance
- Have the opportunity to network and collaborate in a diverse community
Apply directly using this link: https://nyteco.keka.com/careers/jobdetails/54122
TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned universities, and the award of a renowned AI prize (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Data Engineer from the manufacturing industry with over two years of experience to join our team. As a data engineer, you will be responsible for designing, building, and maintaining the infrastructure required for the collection, storage, processing, and analysis of large and complex data sets. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required
- Experience in the manufacturing industry (metal industry is a plus)
- 2+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Apache Spark, Flink, Hadoop, and NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.
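For illustration, a minimal sketch of the kind of Python ETL pipeline the requirements above describe; the connection strings, table names, and columns are assumptions, not details from the posting.

```python
# Minimal ETL sketch (illustrative only): extract sensor readings from a source
# database, clean and aggregate them, and load the result into a warehouse table.
import pandas as pd
from sqlalchemy import create_engine

SOURCE_DB = "postgresql://user:password@source-host/plant_data"       # assumed
WAREHOUSE_DB = "postgresql://user:password@warehouse-host/analytics"  # assumed

def extract() -> pd.DataFrame:
    engine = create_engine(SOURCE_DB)
    return pd.read_sql("SELECT machine_id, ts, temperature FROM sensor_readings", engine)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["temperature"])          # drop incomplete readings
    df["ts"] = pd.to_datetime(df["ts"])
    # hourly average temperature per machine
    return (df.set_index("ts")
              .groupby("machine_id")
              .resample("1h")["temperature"].mean()
              .reset_index())

def load(df: pd.DataFrame) -> None:
    engine = create_engine(WAREHOUSE_DB)
    df.to_sql("hourly_machine_temperature", engine, if_exists="append", index=False)

if __name__ == "__main__":
    load(transform(extract()))
```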
TVARIT GmbH develops and delivers artificial intelligence (AI) solutions for the manufacturing, automotive, and process industries. With its software products, TVARIT enables its customers to make intelligent, well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. Renowned reference customers, competent technology, a strong research team from renowned universities, and a prestigious AI award (e.g., EU Horizon 2020) make TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Senior Data Engineer from the manufacturing industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department’s data infrastructure, including developing a data model, integrating large amounts of data from different systems, building and enhancing a data lakehouse and the subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required:
- Experience in the manufacturing industry (metal industry is a plus)
- 4+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Apache Spark, Flink, Hadoop, and NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical experience & skills that can extract actionable insights from raw data to help improve the business.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have:
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).
Benefits And Perks
- A culture that fosters innovation, creativity, continuous learning, and resilience
- Progressive leave policy promoting work-life balance
- Mentorship opportunities with highly qualified internal resources and industry-driven programs
- Multicultural peer groups and supportive workplace policies
- Annual workcation program allowing you to work from various scenic locations
- Experience the unique environment of a dynamic start-up
Why should you join TVARIT?
Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.
If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!
About The Role
To design, implement, and execute testing procedures for our software applications. In this role, the candidate will be instrumental in driving our software quality assurance lifecycle, collaborating with development teams to establish test strategies, and developing automated tests to uphold our stringent quality benchmarks, thereby reducing manual regression efforts.
By integrating tests into the CI/CD pipeline, the candidate will ensure that software releases are reliable and of high quality. Additionally, the candidate will troubleshoot and diagnose issues in systems under test, contributing to the continuous improvement of the software development process.
What Describes You Best
- Minimum Bachelor's degree in Computer Science, Engineering, or a related discipline.
- 2 to 3 years of experience in Automation Testing.
- Experience working on SaaS/enterprise products is preferred.
Technical Skills: (must have)
- Sound understanding of SDLC processes and the QA lifecycle and methodology
- Hands-on experience with test Automation tools and frameworks such as Selenium WebDriver (with Java), Cucumber, Appium, or TestNG
- Proven experience in test automation using Java
- Strong understanding of the DOM
- Good experience with continuous integration/continuous deployment (CI/CD) concepts and tools like Jenkins or GitLab CI.
- Hands-on experience with any of the bug tracking and test management tools (e.g. GitLab, Jira, Jenkins, Bugzilla, etc.)
- Experience with API testing (Postman or Similar RESTClient)
Additional Skills: (nice to have)
- Knowledge of performance testing tools such as JMeter
- Knowledge of Serenity BDD Framework
- Knowledge of Python programming language
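As a loose illustration of the automation skills above (the role centres on Java-based Selenium, with Python only a nice-to-have), a minimal Selenium-with-pytest check might look like the sketch below; the URL and element locators are hypothetical.

```python
# Illustrative UI automation check using Selenium's Python bindings and pytest.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "https://staging.example.com/login"  # assumed test environment

@pytest.fixture
def browser():
    driver = webdriver.Chrome()      # requires a local ChromeDriver setup
    driver.implicitly_wait(10)       # seconds
    yield driver
    driver.quit()

def test_login_shows_dashboard(browser):
    browser.get(BASE_URL)
    browser.find_element(By.ID, "username").send_keys("qa_user")   # assumed locator
    browser.find_element(By.ID, "password").send_keys("secret")    # assumed locator
    browser.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
    assert "Dashboard" in browser.title
```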
What will you Own
The candidate's key accountability will be to maintain and enhance the QA automation process (along with CI/CD/CT), create and update test suites, write documentation, and ensure quality delivery of our software components through automation testing, contributing to manual testing when required. The candidate will also enhance the product by applying automation scripting to solution development and improving processes and workflows.
How will you spend your time at Eclat
QA and Documentation
- Sketching out ideas for automated software test procedures.
- Enhancing, optimizing, and maintaining automated CI/CD/CT workflows.
- Writing, designing, executing, and maintaining automation scripts for web and mobile platforms.
- Maximizing test coverage for the most critical features of the application to reduce manual testing effort and enable quick regression.
- Reviewing software bug reports, maintaining reporting of automation test suites, and highlighting problem areas.
- Managing and troubleshooting issues in systems under test.
- Establishing and coordinating test strategies with development/product teams.
- Managing documentation repositories and version control systems.
Post-delivery participation - Training and User Feedback
- Participating in user feedback sessions to identify and understand user persona and requirements.
- Working closely with the support team in providing necessary product technical support.
Why Join Us
- Be a part of our growth story as we aim to take a leadership position in international markets
- Opportunity to manage and lead global teams and channel partner network
- Join technology innovators who believe in solving world-scale challenges to drive global knowledge-sharing
- Healthy work/life balance, offering wellbeing initiatives, parental leave, career development assistance, required work infrastructure support
About the Company:
Nextgen Ai Technologies is at the forefront of innovation in artificial intelligence, specializing in developing cutting-edge AI solutions that transform industries. We are committed to pushing the boundaries of AI technology to solve complex challenges and drive business success.
We are currently offering a two-month Data Science Internship.
Data science projects that interns will work on:
Project 01 : Image Caption Generator Project in Python
Project 02 : Credit Card Fraud Detection Project
Project 03 : Movie Recommendation System
Project 04 : Customer Segmentation
Project 05 : Brain Tumor Detection with Data Science
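As a taste of the project list above, here is a minimal sketch of an item-based recommender in the spirit of Project 03, using a tiny made-up ratings matrix; a real project would use a public dataset such as MovieLens.

```python
# Minimal item-based movie recommender: cosine similarity between rating columns.
import numpy as np

movies = ["Inception", "Interstellar", "Titanic", "Up"]
# rows = users, columns = movies, 0 = not rated (figures are invented)
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def similar_movies(title: str, top_n: int = 2) -> list[str]:
    idx = movies.index(title)
    scores = [cosine_sim(ratings[:, idx], ratings[:, j]) for j in range(len(movies))]
    order = np.argsort(scores)[::-1]
    return [movies[j] for j in order if j != idx][:top_n]

print(similar_movies("Inception"))  # e.g. ['Interstellar', 'Up']
```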
Eligibility
A PC or Laptop with decent internet speed.
Good understanding of English language.
Any graduate with a desire to become a web developer. Freshers are welcome.
Knowledge of HTML, CSS, and JavaScript is a plus but NOT mandatory.
You will receive proper training, so don't hesitate to apply even if you don't have a coding background.
Please note that THIS IS AN INTERNSHIP, NOT A JOB.
We recruit permanent employees only from among our interns (if needed).
Duration : 02 Months
MODE: Work From Home (Online)
Responsibilities
Manage reports and sales leads in Salesforce CRM.
Develop content and manage design and user access for SharePoint sites for customers and employees.
Build data-driven reports, stored procedures, and query optimizations using SQL and PL/SQL.
Learn the essentials of C++ and Java to refine code and build the presentation layer of web pages.
Configure and load XML data for BVT tests.
Set up a GitHub page.
Develop Spark scripts using the Scala shell as per requirements.
Develop and A/B test improvements to business survey questions on iOS.
Deploy statistical models to various company data streams using Linux shells.
Create monthly performance-based client billing reports using MySQL and NoSQL databases.
Utilize Hadoop and MapReduce to generate dynamic queries and extract data from HDFS.
Write source code in JavaScript and PHP to make web pages functional.
Excellent problem-solving skills and the ability to work independently or as part of a team.
Effective communication skills to convey complex technical concepts.
Benefits
Internship Certificate
Letter of recommendation
Performance-based stipend
Part-time work from home (2-3 hours per day)
5 days a week, fully flexible shift
We are seeking a skilled DevOps Engineer with 3+ years of experience to join our team on a permanent work-from-home basis.
Responsibilities:
- Develop and maintain infrastructure using Ansible.
- Write Ansible playbooks.
- Implement CI/CD pipelines.
- Manage GitLab repositories.
- Monitor and troubleshoot infrastructure issues.
- Ensure security and compliance.
- Document best practices.
Qualifications:
- Proven DevOps experience.
- Expertise with Ansible and CI/CD pipelines.
- Proficient with GitLab.
- Strong scripting skills.
- Excellent problem-solving and communication skills.
Regards,
Aishwarya M
Associate HR
Who are we?
We are incubators of high-quality, dedicated software engineering teams for our clients. We work with product organizations to help them scale or modernize their legacy technology solutions. We work with startups to help them operationalize their idea efficiently. Incubyte strives to find people who are passionate about coding, learning, and growing along with us. We work with a limited number of clients at a time on dedicated, long term commitments with an aim of bringing a product mindset into services.
What we are looking for
We’re looking to hire software craftspeople. People who are proud of the way they work and the code they write. People who believe in and are evangelists of extreme programming principles. High quality, motivated and passionate people who make great teams. We heavily believe in being a DevOps organization, where developers own the entire release cycle and thus get to work not only on programming languages but also on infrastructure technologies in the cloud.
What you’ll be doing
First, you will be writing tests. You’ll be writing self-explanatory, clean code. Your code will produce the same, predictable results, over and over again. You’ll be making frequent, small releases. You’ll be working in pairs. You’ll be doing peer code reviews.
You will work in a product team. Building products and rapidly rolling out new features and fixes.
You will be responsible for all aspects of development – from understanding requirements, writing stories, analyzing the technical approach to writing test cases, development, deployment, and fixes. You will own the entire stack from the front end to the back end to the infrastructure and DevOps pipelines. And, most importantly, you’ll be making a pledge that you’ll never stop learning!
Skills you need in order to succeed in this role
Most Important: Integrity of character, diligence and the commitment to do your best
Must Have: SQL, Databricks, (Scala / Pyspark), Azure Data Factory, Test Driven Development
Nice to Have: SSIS, Power BI, Kafka, Data Modeling, Data Warehousing
Self-Learner: You must be extremely hands-on and obsessive about delivering clean code
- Sense of Ownership: Do whatever it takes to meet development timelines
- Experience in creating end to end data pipeline
- Experience in Azure Data Factory (ADF) creating multiple pipelines and activities using Azure for full and incremental data loads into Azure Data Lake Store and Azure SQL DW
- Working experience in Databricks
- Strong in BI/DW/Datalake Architecture, design and ETL
- Strong in Requirement Analysis, Data Analysis, Data Modeling capabilities
- Experience in object-oriented programming, data structures, algorithms and software engineering
- Experience working in Agile and Extreme Programming methodologies in a continuous deployment environment.
- Interest in mastering technologies like relational DBMS, TDD, CI tools like Azure DevOps, complexity analysis, and performance tuning
- Working knowledge of server configuration/deployment
- Experience using source control and bug tracking systems, writing user stories and technical documentation
- Expertise in creating tables, procedures, functions, triggers, indexes, views, joins, and optimization of complex queries
- Experience with database versioning, backups, and restores
- Expertise in data security
- Ability to perform database performance tuning and query optimization
A product-based company located in Bangalore.
We are seeking an experienced Data Scientist with a proven track record in Machine Learning, Deep Learning, and a demonstrated focus on Large Language Models (LLMs) to join our cutting-edge Data Science team. You will play a pivotal role in developing and deploying innovative AI solutions that drive real-world impact to patients and healthcare providers.
Responsibilities
• LLM Development and Fine-tuning: fine-tune, customize, and adapt large language models (e.g., GPT, Llama 2, Mistral) for specific business applications and NLP tasks such as text classification, named entity recognition, sentiment analysis, summarization, and question answering. Experience with other transformer-based NLP models such as BERT will be an added advantage.
• Data Engineering: collaborate with data engineers to develop efficient data pipelines, ensuring the quality and integrity of large-scale text datasets used for LLM training and fine-tuning
• Experimentation and Evaluation: develop rigorous experimentation frameworks to evaluate model performance, identify areas for improvement, and inform model selection. Experience in LLM testing frameworks such as TruLens will be an added advantage.
• Production Deployment: work closely with MLOps and Data Engineering teams to integrate models into scalable production systems.
• Predictive Model Design and Implementation: leverage machine learning/deep learning and LLM methods to design, build, and deploy predictive models in oncology (e.g., survival models)
• Cross-functional Collaboration: partner with product managers, domain experts, and stakeholders to understand business needs and drive the successful implementation of data science solutions
• Knowledge Sharing: mentor junior team members and stay up to date with the latest advancements in machine learning and LLMs
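To make the fine-tuning responsibility above concrete, here is a hedged sketch using the Hugging Face Transformers Trainer API for text classification; the base model, dataset, and hyperparameters are illustrative assumptions rather than the team's actual setup.

```python
# Illustrative transformer fine-tuning for text classification with the Trainer API.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "distilbert-base-uncased"   # stand-in; a larger LLM in practice
dataset = load_dataset("imdb")           # placeholder corpus for illustration

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="finetuned-classifier",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),  # small slice
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
```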
Qualifications Required
• Doctoral or master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field
• 5+ years of hands-on experience in designing, implementing, and deploying machine learning and deep learning models
• 12+ months of in-depth experience working with LLMs. Proficiency in Python and NLP-focused libraries (e.g., spaCy, NLTK, Transformers, TensorFlow/PyTorch).
• Experience working with cloud-based platforms (AWS, GCP, Azure)
Additional Skills
• Excellent problem-solving and analytical abilities
• Strong communication skills, both written and verbal
• Ability to thrive in a collaborative and fast-paced environment
Must have:
- 8+ years of experience with a significant focus on developing, deploying & supporting AI solutions in production environments.
- Proven experience in building enterprise software products for B2B businesses, particularly in the supply chain domain.
- Good understanding of generics, OOP concepts, and design patterns
- Solid engineering and coding skills. Ability to write high-performance, production-quality code in Python
- Proficiency with ML libraries and frameworks (e.g., Pandas, TensorFlow, PyTorch, scikit-learn).
- Strong expertise in time series forecasting using statistical, ML, DL, and foundation models
- Experience processing time series data using techniques such as decomposition, clustering, and outlier detection & treatment
- Exposure to generative AI models and agent architectures on platforms such as AWS Bedrock, Crew AI, Mosaic/Databricks, Azure
- Experience working with modern data architectures, including data lakes and data warehouses, having leveraged one or more frameworks such as Airbyte, Airflow, Dagster, AWS Glue, Snowflake, DBT
- Hands-on experience with cloud platforms (e.g., AWS, Azure, GCP) and deploying ML models in cloud environments.
- Excellent problem-solving skills and the ability to work independently as well as in a collaborative team environment.
- Effective communication skills, with the ability to convey complex technical concepts to non-technical stakeholders
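As a small illustration of the time series techniques named above (decomposition and outlier detection), the sketch below uses statsmodels on synthetic data; real work would use the company's demand or sensor history.

```python
# Classical decomposition of a synthetic daily demand series plus simple outlier flagging.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

idx = pd.date_range("2023-01-01", periods=180, freq="D")
rng = np.random.default_rng(0)
demand = 100 + 10 * np.sin(2 * np.pi * idx.dayofweek / 7) + rng.normal(0, 3, len(idx))
series = pd.Series(demand, index=idx)

# trend / seasonal / residual decomposition with a weekly period
result = seasonal_decompose(series, model="additive", period=7)

# flag residual outliers beyond 3 standard deviations
resid = result.resid.dropna()
outliers = resid[np.abs(resid - resid.mean()) > 3 * resid.std()]
print(f"{len(outliers)} potential outliers flagged")
```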
Good To Have:
- Experience with MLOps tools and practices for continuous integration and deployment of ML models.
- Familiarity with deploying applications on Kubernetes
- Knowledge of supply chain management principles and challenges.
- A Master's or Ph.D. in Computer Science, Machine Learning, Data Science, or a related field is preferred
This role will support the FinTech Product Team with various ongoing change initiatives and day-to-day product operations, such as:
a) Testing and data analysis of platform and financial accounting data
b) Campaign management for hosts and guests
c) CS ticket deep dives informing product functionality
Requirements:
- Experienced in core financial accounting concepts under U.S. GAAP accounting principles
- Ability to analyze and interpret large and complex datasets
- Strong attention to detail and ability to document test procedures and outcomes
- Demonstrated ability to work both independently and collaboratively
- Ability to write and execute SQL queries, Python Scripts
- Skilled in spreadsheet applications (e.g., Microsoft Excel, Google Sheets) for data analysis functionality - pivot tables, VLOOKUP, formulas, etc.
- Experience in Quote-to-Cash business processes is a plus
- Experience and familiarity with Oracle Financial Applications is a plus - particularly Oracle Financials General Ledger, Subledgers, Financial Accounting Hub.
- Familiarity with Big Data Systems (Presto etc.) is a plus
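A tiny, hypothetical example of the spreadsheet-style analysis described above, done in pandas rather than Excel: pivoting accounting transactions by month and ledger account. Column names and figures are invented.

```python
# Pivot booking transactions by month and ledger account (illustrative data only).
import pandas as pd

txns = pd.DataFrame({
    "month":   ["2024-01", "2024-01", "2024-02", "2024-02"],
    "account": ["Revenue", "Fees", "Revenue", "Fees"],
    "amount":  [12000.0, -350.0, 15500.0, -410.0],
})

summary = txns.pivot_table(index="month", columns="account",
                           values="amount", aggfunc="sum", fill_value=0)
print(summary)
```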
Our Work Culture
We constantly strive to build, and take pride in, a productive, diverse, and stress-free work environment for our employees, and take a keen interest in ensuring maximum work-life balance. To achieve this, we offer benefits such as:
Additional performance-based perks, insurance benefits, generous leave and vacations, informal office outings, and many more.
To ensure holistic development of the employees we also conduct workshops like personality development, stress handling, leadership and confidence building etc.
If you think your values align with the vision of the company, kindly proceed with filling out the Job Application form and we will be happy to interact with you.
Good Luck!
Staff DevOps Engineer with Azure
EGNYTE YOUR CAREER. SPARK YOUR PASSION.
Egnyte is a place where we spark opportunities for amazing people. We believe that every role has meaning, and every Egnyter should be respected. With 22,000+ customers worldwide and growing, you can make an impact by protecting their valuable data. When joining Egnyte, you’re not just landing a new career, you become part of a team of Egnyters that are doers, thinkers, and collaborators who embrace and live by our values:
Invested Relationships
Fiscal Prudence
Candid Conversations
ABOUT EGNYTE
Egnyte is the secure multi-cloud platform for content security and governance that enables organizations to better protect and collaborate on their most valuable content. Established in 2008, Egnyte has democratized cloud content security for more than 22,000 organizations, helping customers improve data security, maintain compliance, prevent and detect ransomware threats, and boost employee productivity on any app, any cloud, anywhere. For more information, visit www.egnyte.com.
Our Production Engineering team enables Egnyte to provide customers access to their data 24/7 by providing best-in-class infrastructure.
ABOUT THE ROLE
We store billions of files and multiple petabytes of data, and we observe more than 11K API requests per second on average. To make that possible and to provide the best possible experience, we rely on great engineers. For us, people who own their work, from start to finish, are integral. Our engineers are part of the process from design to code, to test, to deployment, and back again for further iterations. You can, and will, touch every level of the infrastructure depending on the day and what project you are working on. The ideal candidate should be able to take a complex problem and execute it end to end, mentoring and setting higher standards for the rest of the team and for new hires.
WHAT YOU’LL DO:
• Design, build, and maintain self-hosted and cloud environments to serve our own applications and services.
• Collaborate with software developers to build stable, scalable, and high-performance solutions.
• Take part in big projects like migrating solutions from self-hosted environments to the cloud, from virtual machines to Kubernetes, from monolith to microservices.
• Proactively make our organization and technology better!
• Advise others on how DevOps can make a positive impact on their work.
• Share knowledge, mentor more junior team members while also still learning and gaining new skills.
• Maintain consistently high standards of communication, productivity, and teamwork across all teams.
YOUR QUALIFICATIONS:
• 5+ years of proven experience in a DevOps Engineer, System Administrator or Developer role, working on infrastructure or build processes.
• Expert knowledge of Microsoft Azure.
• Programming prowess (Python, Golang).
• Knowledge of and experience with deployment and maintenance of Java and Python apps using application and web servers (Tomcat, Nginx, etc.).
• Ability to solve complex problems with simple, elegant and clean code.
• Practical knowledge of CI/CD solutions, GitLab CI or similar.
• Practical knowledge of Docker as a tool for testing and building an environment.
• Knowledge of Kubernetes and related technologies.
• Experience with metric-based monitoring solutions.
• Solid English skills to effectively communicate with other team members.
• Good understanding of the Linux Operating System on the administration level.
• Drive to grow as a DevOps Engineer (we value open-mindedness and a can-do attitude).
• Strong sense of ownership and ability to drive big projects.
BONUS SKILLS:
• Work experience as a Microsoft Azure architect.
• Experience in cloud migration projects.
• Leadership skills and experience.
COMMITMENT TO DIVERSITY, EQUITY, AND INCLUSION:
At Egnyte, we celebrate our differences and thrive on our diversity for our employees, our products, our customers, our investors, and our communities. Egnyters are encouraged to bring their whole selves to work and to appreciate the many differences that collectively make Egnyte a higher-performing company and a great place to be.
About us
Blitz is an instant logistics company in Southeast Asia, founded in 2021. It delivers orders using EV bikes; in addition to delivering instant orders, Blitz finances the EV bikes to drivers on lease, generating a second source of revenue from leasing apart from delivery charges. Blitz is revolutionizing instant coordination with the help of advanced technology-based solutions. It is a product-driven company and uses modern technologies to build products that solve problems in EV-based logistics. Blitz utilizes data coming from the EV bikes through IoT and smart engines to make technology-driven decisions and create a delightful experience for consumers.
About the Role
We are seeking an experienced Data Engineer to join our dynamic team. The Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and infrastructure to support our data-driven initiatives. The ideal candidate will have a strong background in software engineering, database management, and data architecture, with a passion for building robust and efficient data systems
What you will do
- Design, build, and maintain scalable data pipelines and infrastructure to ingest, process, and analyze large volumes of structured and unstructured data.
- Collaborate with cross-functional teams to understand data requirements and develop solutions to meet business needs.
- Optimise data processing and storage solutions for performance, reliability, and cost-effectiveness.
- Implement data quality and validation processes to ensure accuracy and consistency of data.
- Monitor and troubleshoot data pipelines to identify and resolve issues in time.
- Stay updated on emerging technologies and best practices in data engineering and recommend innovations to enhance our data infrastructure.
- Document data pipelines, workflows, and infrastructure to facilitate knowledge sharing and ensure maintainability.
- Create Data Dashboards from the datasets to visualize different data requirements
What we need
- Bachelor's degree or higher in Computer Science, Engineering, or a related field.
- Proven experience as a Data Engineer or similar role, with expertise in building and maintaining data pipelines and infrastructure.
- Proficiency in programming languages such as Python, Java, or Scala.
- Strong knowledge of database systems (e.g., SQL, NoSQL, BigQuery) and data warehousing concepts.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Familiarity with data processing frameworks and tools (e.g., Apache Spark, Hadoop, Kafka).
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
Preferred Qualifications
- Advanced degree in Computer Science, Engineering, or related field.
- Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes).
- Knowledge of machine learning and data analytics concepts.
- Experience with DevOps practices and tools.
- Certifications in relevant technologies (e.g., AWS Certified Big Data Specialty, Google Professional Data Engineer).
Please refer to the Company’s website - https://rideblitz.com/
Key Responsibilities:
• Design, develop, support, and maintain automated business intelligence products using Tableau.
• Good understanding of data warehousing and data management concepts including OLTP, OLAP, dimensional modelling, and star/snowflake schemas.
• Rapidly design, develop, and implement reporting applications that embed KPI metrics and actionable insights into the operational, tactical, and strategic activities of key business functions.
• Identify business requirements, design processes leveraging/adapting the business logic, and regularly communicate with business stakeholders to ensure delivery meets business needs.
• Design and code business intelligence projects using Tableau, ensuring best practices for data visualization and implementation.
• Develop and maintain dashboards and data sources that meet and exceed customer requirements.
• Utilize Python for data manipulation, automation, and integration tasks to support Tableau development.
• Write and optimize SQL queries for data extraction, transformation, and loading processes.
• Partner with business information architects to understand the business use cases supporting and fulfilling business and data strategy.
• Collaborate with Product Owners and cross-functional teams in an agile environment.
• Provide expertise and best practices for data visualization and Tableau implementations.
• Work alongside solution architects in RFI/RFP response solution design, customer presentations, demonstrations, POCs, etc., for growth.
• Understanding of project life cycle and quality processes.
Qualifications:
• 5+ years of experience in Tableau development; Tableau certification is highly preferred.
• Proficiency in Python for data manipulation, automation, and integration tasks.
• Strong understanding and experience with SQL for database management and query optimization.
• Ability to independently learn new technologies and show initiative.
• Demonstrated ability to work independently with minimal direction.
• Desire to stay current with industry technologies and standards.
• Strong presentation skills – ability to simplify complex situations and ideas into compelling and effective written and oral presentations.
• Quick learner – ability to understand and rapidly comprehend new areas, both functional and technical, and apply detailed and critical thinking to customer solutions.
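To illustrate the Python-plus-SQL side of the role, here is a hedged sketch that pulls KPI data with a query and writes a clean extract that a Tableau data source could point at; the query, connection string, and file path are assumptions.

```python
# Extract KPI data from a warehouse and write a Tableau-ready extract file.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@warehouse-host/analytics")  # assumed

KPI_QUERY = """
SELECT order_date::date AS order_day,
       region,
       SUM(net_amount) AS revenue,
       COUNT(*)        AS orders
FROM   fact_orders
WHERE  order_date >= CURRENT_DATE - INTERVAL '90 days'
GROUP  BY 1, 2
"""

kpis = pd.read_sql(KPI_QUERY, engine)
kpis["avg_order_value"] = kpis["revenue"] / kpis["orders"]

# Tableau can connect to this file (or to a published extract) as a data source
kpis.to_csv("kpi_extract.csv", index=False)
```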
Overall Experience: 6+ years
Relevant Experience: 4+ years
Location: Initial 10 days in Hyderabad, then remote
*ONLY IMMEDIATE JOINERS*
Role Description
This is a full-time remote role for a Data Engineer. The Data Engineer will be responsible for daily tasks such as data engineering, data modeling, extract transform load (ETL), data warehousing, and data analytics. Collaboration and communication with cross-functional teams will be required to ensure successful project outcomes.
Qualifications
- SQL, Python/Scala, Spark, Hadoop, Hive, HDFS
- Data Engineering, Data Modeling, and Extract Transform Load (ETL) skills
- Data Warehousing and Data Analytics skills
- Experience in working with large datasets and data pipelines
- Proficiency in programming languages such as Python or SQL
- Knowledge of data integration and data processing tools
- Familiarity with cloud platforms and big data technologies
- Strong problem-solving and analytical skills
- Excellent communication and collaboration abilities
- Ability to work independently and remotely
- Bachelor's degree in Computer Science, Data Science, or a related field
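As an illustration of the Spark/Hive/HDFS skills listed above, a minimal PySpark rollup job might look like the sketch below; paths, schema, and table names are assumptions.

```python
# Read raw events from HDFS, aggregate daily, and write a Hive-compatible table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("daily_event_rollup")
         .enableHiveSupport()
         .getOrCreate())

events = spark.read.parquet("hdfs:///data/raw/events/")   # assumed location

daily = (events
         .withColumn("event_date", F.to_date("event_ts"))
         .groupBy("event_date", "event_type")
         .agg(F.count("*").alias("events"),
              F.countDistinct("user_id").alias("unique_users")))

daily.write.mode("overwrite").saveAsTable("analytics.daily_event_rollup")
spark.stop()
```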
Experience: Software intern in full-stack development.
About: Fletch is a U.S.-based technology firm building an open insurance ecosystem connecting consumer apps to insurance providers. The team has previously built global payment infrastructure at scale (processing $billions in transactions) and founded enterprise technology companies with solutions in commercial use by over 100 corporates.
Fletch has seen rapid traction (existing partnerships with leading U.S. insurance providers) and is backed by top global fintech VCs and angels. The team is now looking to scale further through hiring and expansion.
Location: Remote
Responsibilities:
- Developing new user-facing features using ReactJS, JS, Python, Node.
- Maintenance and enhancements of existing insurance applications
Skills:
- Knowledge of ReactJS and its core principles like state, hooks, HOCs, etc.
- Proficiency in JavaScript, including DOM manipulation and the JavaScript object model
- Proficiency in Python programming
- Understanding of RESTful APIs
We are a software development company. We have primarily worked with enterprises and startups that often want to build a product from scratch. Our mission is to build software with solid foundations, addressing the primary concerns of startup founders when working with agencies. We believe in prioritizing maintainability, simplicity, and the Open-Closed Principle for long-term value.
We are seeking a passionate and experienced Senior Software Engineer to join our team and play a key role in building software with solid engineering foundations. You will be involved in all stages of the development process, from identifying business entities and defining their behaviors, to implementing clean and maintainable code using object-oriented programming principles.
Responsibilities:
1. Harness your React/Python/TypeScript mastery to craft high-performing applications that delight our global audience.
2. Champion automated tests like a knight in shining armor. If your code was a ship, your tests would be the lighthouse guiding it to safety.
3. Employ your object-oriented programming prowess to sculpt code that's as efficient as it is elegant.
4. Debug with the precision of a Swiss watchmaker. We believe the devil is in the details, and your keen eye will keep those pesky bugs at bay.
5. Push your changes to production and proactively monitor product to ensure value is delivered.
6. Help your peers by reviewing their PRs and design documents.
7. Help the company by recruiting amazing team members, like yourself.
About You:
1. You write JavaScript using TypeScript and use object-oriented programming paradigms to build software.
2. You have shipped products to production using Python, React and other relevant technologies.
3. You don't ship code without writing automated test coverage.
4. You apply the "broken windows theory" to software development on a day-to-day basis.
5. You communicate well and bring clarity to discussions.
6. B.Tech in CS/EE/ECE is a plus.
Perks and Benefits:
1. Work remotely, anywhere in the world! Ditch the commute, embrace flexibility.
2. Comprehensive health insurance - spouse, kids, parents (pre-existing conditions covered!), 24/7 remote doctors.
3. Generous PTO (12 paid, 6 sick) + national/regional holidays + paid parental leave.
4. Annual off-sites - fully funded team trips, work and fun combined!
5. Drive initiatives from scratch, solve real challenges, maximize your career value.
6. Supportive, results-oriented team. Learn, grow, be your best.
at TensorIoT Software Services Private Limited, India
About TensorIoT
TensorIoT is an AWS Advanced Consulting Partner. We help companies realize the value and efficiency of the AWS ecosystem. From building PoCs and MVPs to production-ready applications, we are tackling complex business problems every day and developing solutions to drive customer success.
TensorIoT's founders helped build world-class IoT and AI platforms at AWS and Google and are now creating solutions to simplify the way enterprises incorporate edge devices and their data into their day-to-day operations. Our mission is to help connect devices and make them intelligent. Our founders firmly believe in the transformative potential of smarter devices to enhance our quality of life, and we're just getting started!
TensorIoT is proud to be an equal opportunity employer. This means that we are committed to diversity and inclusion and encourage people from all backgrounds to apply. We do not tolerate discrimination or harassment of any kind and make our hiring decisions based solely on qualifications, merit, and business needs at the time.
Job description
As a Mid-Level Python Developer, you will:
- Analyze user needs and develop software solutions.
- Work with project managers and product owners to meet specification needs.
- Recommend software upgrades to optimize operational efficiency.
- Deliver scalable and responsive software using TypeScript and Python.
- Collaborate with other developers to design and optimize code.
- Create flowcharts and user guides for new and existing programs.
- Document all programming tasks and procedures.
- Perform routine software maintenance.
- Deploy and maintain CI/CD pipelines.
- Develop and maintain data pipelines. This includes scaling the pipeline to accommodate anticipated volume and complexity.
- Collaborate with external clients and internal team members to meet product deadlines.
We're looking for someone who has:
- Experience with AWS services (must)
- A bachelor’s degree in computer science, Engineering, or related fields
- 4 - 8 years of experience in software development, computer engineering, or other related fields
- Expert-level experience with Python and Node.js
- Familiarity and comfort with REST APIs
- A deadline- and detail-oriented mindset
- Strong analytical and critical thinking skills
- Familiarity with DevOps tools and best practices
- Experience developing scalable data processing systems
Bonus points for someone with:
- Experience with IoT, ML, AI, or VR
- Amazon Web Services (AWS) certification(s) (preferred)
- Experience with microcomputers and microcontrollers
- Experience with the following AWS DevOps services: CodePipeline, CodeBuild, or CodeCommit
- Experience with the following Data Engineering services: AWS Lake Formation, Glue, Redshift, EMR, or QuickSight.
About Springworks
At Springworks, we're on a mission to revolutionize the world of People Operations. With our innovative tools and products, we've already empowered over 500,000+ employees across 15,000+ organizations and 60+ countries in just a few short years.
But what sets us apart? Let us introduce you to our exciting product stack:
- SpringVerify: Our B2B background verification platform
- EngageWith: Spark vibrant cultures! Our recognition platform adds magic to work.
- Trivia: Fun remote team-building! Real-time games for strong bonds.
- SpringRole: Future-proof profiles! Blockchain-backed skill showcase.
- Albus: AI-powered workplace search and knowledge bot for companies
Join us at Springworks and be part of the remote work revolution. Get ready to work, play, and thrive in an environment that's anything but ordinary!
Role Overview
This role is for our Albus team. As a SDE 2 at Springworks, you will be responsible for designing, developing, and maintaining robust, scalable, and efficient web applications. You will work closely with cross-functional teams, turning innovative ideas into tangible, user-friendly products. The ideal candidate has a strong foundation in both front-end and back-end technologies, with a focus on Python, Node.js and ReactJS. Experience in Artificial Intelligence (AI), Machine Learning (ML) and Natural Language Processing (NLP) will be a significant advantage.
Responsibilities:
- Collaborate with product management and design teams to understand user requirements and translate them into technical specifications.
- Develop and maintain server-side logic using Node.js and Python.
- Design and implement user interfaces using React.js with focus on user experience.
- Build reusable and efficient code for future use.
- Implement security and data protection measures.
- Collaborate with other team members and stakeholders to ensure seamless integration of front-end and back-end components.
- Troubleshoot and debug complex issues, identifying root causes and implementing effective solutions.
- Stay up-to-date with the latest industry trends, technologies, and best practices to drive innovation within the team.
- Participate in architectural discussions and contribute to technical decision-making processes.
Goals (not limited to):
1 month into the job:
- Become familiar with the company's products, codebase, development tools, and coding standards. Aim to understand the existing architecture and code structure.
- Ensure that your development environment is fully set up and configured, and you are comfortable with the team's workflow and tools.
- Start contributing to the development process by taking on smaller tasks or bug fixes. Ensure that your code is well-documented and follows the team's coding conventions.
- Begin collaborating effectively with team members, attending daily stand-up meetings, and actively participating in discussions and code reviews.
- Understand the company's culture, values, and long-term vision to align your work with the company's goals.
3 months into the job:
- Be able to independently design, develop, and deliver small to medium-sized features or improvements to the product.
- Demonstrate consistent improvement in writing clean, efficient, and maintainable code. Receive positive feedback on code reviews.
- Continue to actively participate in team meetings, offer suggestions for process improvements, and collaborate effectively with colleagues.
- Start assisting junior team members or interns by sharing knowledge and providing mentorship.
- Seek feedback from colleagues and managers to identify areas for improvement and implement necessary changes.
6 months into the job:
- Take ownership of significant features or projects, from conception to deployment, demonstrating leadership in technical decision-making.
- Identify areas of the codebase that can benefit from refactoring or performance optimizations and work on these improvements.
- Propose and implement process improvements that enhance the team's efficiency and productivity.
- Continue to expand your technical skill set, potentially by exploring new technologies or frameworks that align with the company's needs.
- Strengthen your collaboration with other departments, such as product management or design, to ensure alignment between development and business objectives.
Requirements
- Minimum 4 years of experience working with Python along with machine learning frameworks and NLP technologies.
- Strong understanding of microservices and messaging systems like SQS.
- Experience in designing and maintaining NoSQL databases (MongoDB)
- Familiarity with RESTful API design and implementation.
- Knowledge of version control systems (e.g., Git).
- Ability to work collaboratively in a team environment.
- Excellent problem-solving and communication skills, and a passion for learning. Essentially, having a builder mindset is a plus.
- Proven ability to work on multiple projects simultaneously.
Nice to Have:
- Experience with containerization (e.g., Docker, Kubernetes).
- Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud).
- Knowledge of agile development methodologies.
- Contributions to open-source projects or a strong GitHub profile.
- Previous experience working in a startup or fast-paced environment.
- Strong understanding of front-end technologies such as HTML, CSS, and JavaScript.
About Company / Benefits
- Work from anywhere effortlessly with our remote setup perk: Rs. 50,000 for furniture and headphones, plus an annual addition of Rs. 5,000.
- We care about your well-being! Our health scheme covers not only your physical health but also your mental and social well-being. We've got you covered from head to toe!
- Say hello to endless possibilities with our learning and growth opportunities. We're here to fuel your curiosity and help you reach new heights.
- Take a breather and enjoy 30 annual paid leave days. It's time to relax, recharge, and make the most of your time off.
- Let's celebrate! We love company outings and celebrations that bring the team together for unforgettable moments and good vibes.
- We'll reimburse your workation trips, turning your travel dreams into reality.
- We've got your lifestyle covered. Treat yourself with our lifestyle allowance, which can be used for food, OTT, health/fitness, and more. Plus, we'll reimburse your internet expenses so you can stay connected wherever you go!
Join our remote team and experience the freedom and flexibility of asynchronous communication. Apply now!
Know more about Springworks:
- Life at Springworks: https://www.springworks.in/blog/category/life-at-springworks/
- Glassdoor Reviews: https://www.glassdoor.co.in/Overview/Working-at-Springworks-EI_IE1013270.11,22.htm
- More about Asynchronous Communication: https://www.springworks.in/blog/asynchronous-communication-remote-work/
at Optisol Business Solutions Pvt Ltd
Role Summary
As a Data Engineer, you will be an integral part of our Data Engineering team supporting an event-driven, serverless data engineering pipeline on the AWS cloud, responsible for assisting in the end-to-end analysis, development, and maintenance of data pipelines and systems (DataOps). You will work closely with fellow data engineers and production support to ensure the availability and reliability of data for analytics and business intelligence purposes.
Requirements:
· Around 4 years of working experience in data warehousing / BI system.
· Strong hands-on experience with Snowflake AND strong programming skills in Python
· Strong hands-on SQL skills
· Knowledge of cloud databases such as Snowflake, Redshift, Google BigQuery, RDS, etc.
· Knowledge of dbt for cloud databases
· AWS Services such as SNS, SQS, ECS, Docker, Kinesis & Lambda functions
· Solid understanding of ETL processes, and data warehousing concepts
· Familiarity with version control systems (e.g., Git/Bitbucket) and collaborative development practices in an agile framework
· Experience with scrum methodologies
· Infrastructure build tools such as CFT / Terraform is a plus.
· Knowledge of Denodo, data cataloguing tools, and data quality mechanisms is a plus.
· Strong team player with good communication skills.
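As one illustrative slice of an event-driven, serverless pipeline like the one described above, a minimal Lambda handler that drains SQS messages into an S3 staging area might look like this; the bucket, payload shape, and downstream load step are assumptions.

```python
# Lambda handler triggered by SQS: stages each message body in S3 for a later
# warehouse load (e.g., a COPY INTO Snowflake step downstream).
import json
import boto3

s3 = boto3.client("s3")
STAGING_BUCKET = "example-staging-bucket"   # assumed

def handler(event, context):
    """Triggered by SQS; writes each message body to S3 for downstream loading."""
    records = event.get("Records", [])
    for record in records:
        payload = json.loads(record["body"])
        key = f"incoming/{payload['source']}/{record['messageId']}.json"
        s3.put_object(Bucket=STAGING_BUCKET, Key=key,
                      Body=json.dumps(payload).encode("utf-8"))
    return {"processed": len(records)}
```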
Overview Optisol Business Solutions
OptiSol was named to this year's Best Companies to Work For list by Great Place to Work. We are a team of about 500+ agile employees with a development center in India and global offices in the US, UK (United Kingdom), Australia, Ireland, Sweden, and Dubai. Over a 16+ year journey we have built about 500+ digital solutions, and we have 200+ happy and satisfied clients across 24 countries.
Benefits, working with Optisol
· Great Learning & Development program
· Flextime, Work-at-Home & Hybrid Options
· A knowledgeable, high-achieving, experienced & fun team.
· Spot Awards & Recognition.
· The chance to be a part of next success story.
· A competitive base salary.
More than just a job, we offer an opportunity to grow. Are you someone who looks to build your future and your dream? We have the job for you to make that dream come true.
Job Description:
We are looking for an experienced SQL Developer to become a valued member of our dynamic team. In the role of SQL Developer, you will be tasked with creating top-notch database solutions, fine-tuning SQL databases, and providing support for our applications and systems. Your proficiency in SQL database design, development, and optimization will be instrumental in delivering efficient and dependable solutions to fulfil our business requirements.
Responsibilities:
● Create high-quality database solutions that align with the organization's requirements and standards.
● Design, manage, and fine-tune SQL databases, queries, and procedures to achieve optimal performance and scalability.
● Collaborate on the development of DBT pipelines to facilitate data transformation and modelling within our data warehouse.
● Evaluate and interpret ongoing business report requirements, gaining a clear understanding of the data necessary for insightful reporting.
● Conduct research to gather the essential data for constructing relevant and valuable reporting materials for stakeholders.
● Analyse existing SQL queries to identify areas for performance enhancements, implementing optimizations for greater efficiency.
● Propose new queries to extract meaningful insights from the data and enhance reporting capabilities.
● Develop procedures and scripts to ensure smooth data migration between systems, safeguarding data integrity.
● Deliver timely management reports on a scheduled basis to support decision-making processes.
● Investigate exceptions related to asset movements to maintain accurate and dependable data records.
Requirements and Qualifications:
● A minimum of 3 years of hands-on experience in SQL development and administration, showcasing a strong proficiency in database management.
● A solid grasp of SQL database design, development, and optimization techniques.
● A Bachelor's degree in Computer Science, Information Technology, or a related field.
● An excellent understanding of DBT (Data Build Tool) and its practical application in data transformation and modelling.
● Proficiency in either Python or JavaScript, as these are commonly utilized for data-related tasks.
● Familiarity with NoSQL databases and their practical application in specific scenarios.
● Demonstrated commitment and pride in your work, with a focus on contributing to the company's overall success.
● Strong problem-solving skills and the ability to collaborate effectively within a team environment.
● Excellent interpersonal and communication skills that facilitate productive collaboration with colleagues and stakeholders.
● Familiarity with Agile development methodologies and tools that promote efficient project management and teamwork.
at Intellectyx Data Science India Private Limited
Attaching the job description for reference:
Role: Automation Tester
Experience: 5+ yrs
Mode: WFO
Location: Coimbatore
Role Description
As an Automation Cloud QA Engineer, you will be responsible for designing, implementing, and executing automated tests for cloud-based applications. You will work closely with the development and DevOps teams to ensure the quality and reliability of software releases in a cloud environment. The ideal candidate will have a strong background in test automation, cloud technologies, and a deep understanding of quality assurance best practices.
Automation QA Engineer Responsibilities:
Automated Testing:
· Design, develop, and maintain automated test scripts for cloud-based applications using industry-standard tools and frameworks.
· Implement end-to-end test automation to validate system functionality, performance, and scalability.
Cloud Testing:
· Perform testing on cloud platforms (e.g., AWS, Azure, Google Cloud) to ensure applications function seamlessly in a cloud environment.
· Collaborate with DevOps teams to ensure continuous integration and deployment pipelines are robust and reliable.
Test Strategy and Planning:
· Contribute to the development of test strategies, test plans, and test cases for cloud-based applications.
· Work with cross-functional teams to define and implement quality metrics and standards.
Defect Management:
· Identify, document, and track defects using established tools and processes.
· Collaborate with developers and product teams to ensure timely resolution of issues.
Performance Testing:
· Conduct performance testing to identify and address bottlenecks, ensuring optimal application performance in a cloud environment.
Collaboration
· Work closely with development teams to understand system architecture and functionality.
· Collaborate with cross-functional teams to promote a culture of quality throughout the development lifecycle.
Requirements And Skills:
· Bachelor's degree in Computer Science, Engineering, or a related field.
· Proven experience as a QA Engineer, with a focus on automation and cloud technologies.
· Strong programming skills in languages like Python, Java, or other scripting languages.
· Experience with cloud platforms such as AWS, Azure, or Google Cloud.
· Proficiency in using automation testing frameworks (e.g., Selenium, JUnit, TestNG), Selenium is preferred.
· Solid understanding of software development and testing methodologies.
· Excellent problem-solving and analytical skills.
· Strong communication and collaboration skills.
· Maintain documentation for the test environment
Regards
Divya
Software Architect
Bangalore, India / Engineering / Full-time
Job Overview:
As a Software Architect, you will play a crucial role in designing, developing, and maintaining robust and scalable backend solutions for our software applications. You will be responsible for making strategic technical decisions, and collaborating with cross-functional teams to ensure the successful delivery of high-quality software products with scalable backend infrastructure.
Responsibilities:
- System Architecture:
- Design and architect scalable, efficient, and maintainable backend systems.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications.
- Technical Leadership:
- Provide technical leadership and guidance to the development team, ensuring best practices and coding standards are followed.
- Mentor and coach team members, fostering a culture of continuous learning and improvement.
- Backend Development:
- Lead the development of backend components, modules, and features, primarily on the Ruby on Rails tech stack, while also being open to contributing to other tech stacks based on Java/Kotlin and Python.
- Implement and maintain APIs, data models, and database structures to support application functionality.
- Performance Optimization:
- Identify and address performance bottlenecks, ensuring optimal system response times and resource utilization.
- Implement caching strategies and other performance optimization techniques.
- Collaboration:
- Collaborate with frontend developers, product managers, and other stakeholders to integrate frontend and backend components seamlessly.
- Participate in code reviews to ensure code quality, adherence to standards, and knowledge sharing within the team.
- Security and Compliance:
- Implement and enforce security best practices to safeguard sensitive data.
- Stay updated on industry trends and emerging technologies to ensure compliance and security standards are met.
- Documentation:
- Create and maintain comprehensive technical documentation for the backend architecture, APIs, and development processes.
- Continuous Improvement:
- Proactively identify opportunities for process improvement and contribute to the evolution of development methodologies and practices.
Qualifications:
- Overall 10+ years of experience, with 2+ years as a Software Architect with a focus on backend development using Ruby on Rails.
- In-depth knowledge of Ruby on Rails framework, database design, and API development.
- Strong understanding of software architecture principles, design patterns, and best practices.
- Experience with performance optimization, scalability, and security considerations.
- Excellent communication and collaboration skills.
- Leadership experience with a demonstrated ability to mentor and guide development teams.
- Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent work experience).
Bonus points:
- Familiarity with front-end technologies and frameworks (ReactJS).
- Experience with other programming languages (Kotlin/Python).
- Experience with cloud platforms and microservices architecture.
- Knowledge of DevOps practices and tools (AWS, Kubernetes, etc.).
Join us in revolutionizing the way software solutions are developed, and contribute to building cutting-edge applications that make a positive impact on our users and the industry.
JOB POSITION - FULL-TIME BACKEND ENGINEER (TRADING SYSTEMS)
Satsio is an African startup that is building a crypto exchange for both spot trading and perpetual futures trading. We have many exciting and innovative features and products in the pipeline. We are looking to add to our engineering team and we are recruiting for a full time backend engineer (trading systems).
How to apply:
After reading the job description, please complete the job application via the following link: https://forms.gle/wEdVVktX3iVLCqj59
Location
This is a fully remote position. We are accepting applications from worldwide candidates.
Remuneration
The salary range in USDT is shown at the top of the advert and depends on skills, experience, and location; subject to good performance, shares amounting to 1% of the business will vest.
Requirements
· Proficiency in Python and expertise in working with websockets
· Proven experience in designing and implementing complex REST APIs
· Project experience with Flask and Django
· Proficient with Linux and experienced working with cloud servers
· Excellent communication skills
· Intellectually motivated and a quick learner
· Fluent in both spoken and written English
Preferences
· Proficient in C++ or another low latency language
· Experience building matching engines and trading systems
· Experience working in the crypto exchange industry
· Experience with blockchain nodes, creating blockchain wallet systems, cryptocurrency deposit and withdrawal systems and optimising network fees
· Familiar with the agile development process, Github flow, and modern software engineering practices
· Ability to align working hours with the standard 9am-6pm UTC+1 schedule
Key Responsibilities
· Review and understand the existing backend code base and make improvements where necessary
· Maintain the matching engine and improve its speed (a simplified sketch follows this list)
· Maintain the system that updates user balances following user transactions and performs the necessary checks before permitting transactions
· Manage cloud server configuration and deployment, and optimise resource costs
· Create APIs, with supporting documentation, for users who trade algorithmically directly via the API rather than through the frontend
· Work on our P2P trading product and KYC process
· Build the perpetual futures trading product and integrate data from third-party APIs
· Perform unit, integration, performance, and end-to-end tests
· Work on various other backend tasks the startup requires
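To make the matching-engine responsibility concrete, here is a deliberately simplified, hypothetical price-time priority sketch in Python. A production engine needs per-market order books, partial-fill accounting, persistence, and far lower latency (hence the C++ preference above); nothing here describes Satsio's actual system.

```python
import heapq
import itertools

class ToyMatchingEngine:
    """Toy limit-order matching with price-time priority (illustrative only)."""

    def __init__(self):
        self._seq = itertools.count()   # tie-breaker: earlier orders at the same price fill first
        self.bids = []                  # max-heap emulated by storing negated prices
        self.asks = []                  # min-heap keyed on price

    def submit(self, side, price, qty):
        """Match an incoming limit order; return fills and rest any remainder on the book."""
        trades = []
        if side == "buy":
            # Cross against the cheapest asks while they sit at or below our limit.
            while qty > 0 and self.asks and self.asks[0][0] <= price:
                ask_price, seq, ask_qty = heapq.heappop(self.asks)
                fill = min(qty, ask_qty)
                trades.append((ask_price, fill))
                qty -= fill
                if ask_qty > fill:  # partially filled resting order keeps its time priority
                    heapq.heappush(self.asks, (ask_price, seq, ask_qty - fill))
            if qty > 0:
                heapq.heappush(self.bids, (-price, next(self._seq), qty))
        else:
            # Cross against the highest bids while they sit at or above our limit.
            while qty > 0 and self.bids and -self.bids[0][0] >= price:
                neg_price, seq, bid_qty = heapq.heappop(self.bids)
                fill = min(qty, bid_qty)
                trades.append((-neg_price, fill))
                qty -= fill
                if bid_qty > fill:
                    heapq.heappush(self.bids, (neg_price, seq, bid_qty - fill))
            if qty > 0:
                heapq.heappush(self.asks, (price, next(self._seq), qty))
        return trades

engine = ToyMatchingEngine()
engine.submit("sell", 100.0, 5)
print(engine.submit("buy", 101.0, 3))  # -> [(100.0, 3)]
```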
Startup culture
Our culture is one of constant innovation. We are looking for a colleague who shows great teamwork, creativity, a passion for innovation, a constant drive to improve, hard work, self-starting initiative, quick learning, a can-do attitude, a focus on shipping product, loyalty, and a positive mindset. We seek to create an environment in which colleagues can see their long-term career with us and flourish as we grow.
Hi,
We need a full-stack developer who can write quality code quickly, as we are a fast-paced startup.
Roles and Responsibilities
- Backend development in Python/Flask
- Frontend development in React/Next
- Deployment using AWS
You will learn a lot on the job, so we need someone who is willing to learn and put in the work.
DocNexus is revolutionizing the global medical affairs & commercial ecosystem with search. We provide a next-generation data platform that simplifies searching through millions of insights, publications, clinical trials, payments, and social media data within seconds to identify healthcare professionals (HCPs), products, manufacturers, and healthcare systems. Leveraging AI-powered Knowledge Graphs, DocNexus assists life science organizations in finding the right key opinion leaders (KOL/DOLs) who play a crucial role in developing and bringing life-saving pharmaceutical products and medical devices to market. Backed by industry leaders such as Techstars, JP Morgan, Mass Challenge, and recognized as one of the Top 200 Most Innovative Startups by TechCrunch Disrupt, we are committed to transforming healthcare insights. We are seeking a skilled and passionate DevOps Engineer to join our dynamic team and contribute to the efficient development, deployment, and maintenance of our platform.
We are looking for a visionary Sr. Full Stack Engineering Lead who is passionate about building and leading our technology department. The ideal candidate will have a solid technical background and experience in leading a team to drive innovation and growth. As Engineering Lead, you will oversee the development and dissemination of technology for external customers, vendors, and other clients to help improve and increase business.
Leadership and Strategy:
- Lead the engineering team and make strategic decisions regarding the technology stack, project management, and resource allocation.
- Establish the company’s technical vision and lead all aspects of technological development.
Development:
- Develop and maintain the front-end and back-end of web applications.
- Ensure the performance, quality, and responsiveness of applications.
- Collaborate with a team to define, design, and ship new features.
Maintenance and Optimization:
- Maintain code integrity and organization.
- Identify and correct bottlenecks and fix bugs.
- Continually work on optimizing the performance of different applications.
Security:
- Ensure the security of the web applications by integrating security best practices.
- Regularly update the system to protect against vulnerabilities.
Innovation:
- Research and implement new technologies and frameworks that can improve the performance and user experience of the platform.
- Stay informed on emerging technologies and trends that can potentially impact the company's products and services.
Collaboration and Communication:
- Work closely with other departments to understand their needs and translate them into technical solutions.
- Communicate technology strategy to partners, management, investors, and employees.
Project Management:
- Oversee and support project planning, deadlines, and progress.
- Ensure that the technology standards and best practices are maintained across the organization.
Mentoring and Team Building:
- Foster a culture of innovation and excellence within the technology team.
- Mentor and guide the professional and technical development of team members.
Front-End Development:
- HTML/CSS: For structuring and styling the web pages.
- JavaScript/TypeScript: Core scripting language, along with frameworks like Angular, React, or Vue.js for dynamic and responsive user interfaces.
Back-End Development:
- Python: Using frameworks like Django or Flask for server-side logic.
- Node.js: JavaScript runtime environment for building scalable network applications.
- Ruby on Rails: A server-side web application framework written in Ruby.
Database Management:
- SQL Databases: MySQL, PostgreSQL for structured data storage.
- NoSQL Databases: MongoDB, Cassandra for unstructured data or specific use cases.
Server Management:
- Nginx or Apache: For server and reverse proxy functionalities.
- Docker: For containerizing applications and ensuring consistency across multiple development and release cycles.
- Kubernetes: For automating deployment, scaling, and operations of application containers.
DevOps and Continuous Integration/Continuous Deployment (CI/CD):
- Git: For version control.
- Jenkins, Travis CI, or CircleCI: For continuous integration and deployment.
- Ansible, Chef, or Puppet: For configuration management.
Cloud Services:
- AWS: For various cloud services like computing, database storage, content delivery, etc.
- Serverless Frameworks: Such as AWS Lambda or Google Cloud Functions for running code without provisioning or managing servers.
Security:
- OAuth, JWT: For secure authentication mechanisms (a brief PyJWT sketch follows this stack overview).
- SSL/TLS: For secure data transmission.
- Various Encryption Techniques: To safeguard sensitive data.
Performance Monitoring and Testing:
- Selenium, Jest, or Mocha: For automated testing.
- New Relic or Datadog: For performance monitoring.
Data Science and Analytics:
- Python Libraries: NumPy, Pandas, or SciPy for data manipulation and analysis.
- Machine Learning Frameworks: TensorFlow, PyTorch for implementing machine learning models.
Other Technologies:
- GraphQL: For querying and manipulating data efficiently.
- WebSockets: For real-time bi-directional communication between web clients and servers.
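As a brief illustration of the OAuth/JWT item in the Security group above, the sketch below issues and verifies a signed token with the PyJWT library. The secret, claims, and 15-minute lifetime are assumptions for the example, not requirements of the role.

```python
import datetime
import jwt  # PyJWT

SECRET = "change-me"  # illustrative; real deployments load secrets from a vault or environment

def issue_token(user_id: str, minutes: int = 15) -> str:
    """Create a short-lived HS256-signed access token."""
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {
        "sub": user_id,
        "iat": now,
        "exp": now + datetime.timedelta(minutes=minutes),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    """Decode and validate the token; raises jwt.InvalidTokenError on failure or expiry."""
    return jwt.decode(token, SECRET, algorithms=["HS256"])

token = issue_token("user-42")
print(verify_token(token)["sub"])  # -> user-42
```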
at Simform
Company Description:
Simform is an innovative product engineering company that assists organizations of any size to identify and solve key business challenges with DevOps, cloud-native development, and quality engineering services. Founded in 2010, our agile remote teams of engineers immerse themselves in your project, maintain your company culture, and work in line with your strategic goals. At Simform, we are dedicated to developing competitiveness and agility for companies using software.
Role Description:
This is a full-time hybrid role for a Sr. Python/Cypress Automation Engineer located in Ahmedabad, India, with flexibility for some remote work. You will be responsible for developing, testing, and maintaining a scalable automation framework for web and mobile applications, working closely with cross-functional teams to identify and resolve issues, and collaborating with other QA engineers to ensure high-quality solutions.
Qualifications:
- Bachelor's degree in Computer Science or a related field with 4+ years of relevant work experience
- Strong proficiency in Python with an excellent understanding of Python unit testing frameworks (PyTest, Unittest), web scraping (bs4, lxml), and good-to-have modules (requests, pandas, numpy, etc.); a brief PyTest sketch follows this list
- Good Knowledge in testing frameworks such as Cypress, Selenium, Appium, and TestNG
- Experience in End-to-End testing, Integration testing, Regression testing, and API testing
- Demonstrated experience with test reporting tools such as Allure, ExtentReports, and Cucumber
- Excellent understanding of CI/CD pipeline, build automation, and deployment pipelines
- Experience in software development methodologies such as Agile, Scrum and Kanban
- Expertise in SQL and other databases like DynamoDB, MongoDB, etc.
- An analytical mind with keen problem-solving skills and attention to detail
- Excellent verbal and written communication skills in English
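As a small, hypothetical illustration of the PyTest plus requests/bs4 skills listed above (the URL and assertions are placeholders, not Simform's actual test suite):

```python
# test_homepage.py -- run with: pytest -v
import pytest
import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example.com"  # placeholder target, not a real system under test

@pytest.fixture(scope="module")
def homepage_html():
    """Fetch the page once and share the HTML across all tests in this module."""
    response = requests.get(BASE_URL, timeout=10)
    response.raise_for_status()
    return response.text

def test_homepage_has_title(homepage_html):
    soup = BeautifulSoup(homepage_html, "html.parser")
    assert soup.title is not None and soup.title.string.strip()

def test_homepage_has_at_least_one_link(homepage_html):
    soup = BeautifulSoup(homepage_html, "html.parser")
    assert len(soup.find_all("a")) >= 1
```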
Why Simform?
- Flat-hierarchical, friendly, engineering-oriented, and growth-focused culture.
- Flexible work timing, 18+12 leaves, and leaves for life events.
- Free health insurance
- Office facility with large fully-equipped game zone, in-office kitchen with affordable lunch service, and free snacks.
- Sponsorship for certifications/events, library service, and the latest cutting-edge tech learning.
- Develop, maintain, and enhance robust backend systems using Python3 and frameworks like Django (mandatory), Flask, and FastAPI (good to have).
- Design, implement, and maintain highly efficient and automated continuous integration and continuous deployment (CI/CD) processes using Jenkins and configuration management with Ansible.
- Elevate our testing culture by architecting and implementing innovative testing strategies, focusing on both unit and integration testing to ensure exceptional code quality and extensive coverage.
- Follow peer-to-peer code reviews and cross-team collaboration to build scalable and reliable solutions.
- Proficiently work with at least one cloud technology, preferably AWS, to deploy and manage applications in the cloud environment.
- Possess knowledge of Nginx, load balancing, and scalability to optimize system performance and reliability.
- Good experience working in distributed microservice architectures and operating them with crucial requirements such as request tracing, debugging critical issues, logging, monitoring, and alerting (a brief request-tracing sketch follows this list).
- Work in containerized environments using Docker and Docker Compose, and have experience with AWS ECS for container orchestration.
- Prioritize maintainability and reliability in developing and maintaining software systems, pushing the boundaries of what's possible in our product ecosystem.
- Collaborate effectively within a team, taking initiatives and driving projects forward with minimal micro-management.
- Pave the way for maintainable and reliable codebases by introducing novel approaches and best practices in software development.
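Here is a minimal sketch of the request-tracing concern mentioned in the list above. The framework choice (FastAPI, listed as good-to-have) and the `X-Request-ID` header name are assumptions for illustration only.

```python
import logging
import uuid

from fastapi import FastAPI, Request

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("api")

app = FastAPI()

@app.middleware("http")
async def add_request_id(request: Request, call_next):
    """Attach a correlation ID to every request so it can be traced across services."""
    request_id = request.headers.get("X-Request-ID", str(uuid.uuid4()))
    logger.info("start request_id=%s path=%s", request_id, request.url.path)
    response = await call_next(request)
    response.headers["X-Request-ID"] = request_id
    logger.info("end request_id=%s status=%s", request_id, response.status_code)
    return response

@app.get("/health")
async def health():
    return {"status": "ok"}
```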
Requirements
- Bachelor’s/Master’s degree in computer science.
- Strong teamwork and communication skills, with a proactive approach to project management and task ownership.
- Proven track record as an innovative Python developer with a focus on product engineering.
- Visionary mindset to work in an agile environment, participating in sprint planning, stand-ups, and retrospectives.
Benefits
100% Remote
Insurance
at Blue Hex Software Private Limited
In this position, you will play a pivotal role in collaborating with our CFO, CTO, and our dedicated technical team to craft and develop cutting-edge AI-based products.
Role and Responsibilities:
- Develop and maintain Python-based software applications.
- Design and work with databases using SQL.
- Use Django, Streamlit, and front-end technologies such as Node.js and Svelte for web development.
- Create interactive data visualizations with charting libraries (an illustrative sketch follows the skills list below).
- Collaborate on scalable architecture and experimental tech.
- Work with AI/ML frameworks and data analytics.
- Utilize Git, DevOps basics, and JIRA for project management.
Skills and Qualifications:
- Strong Python programming skills.
- Proficiency in OOP and SQL.
- Experience with Django, Streamlit, Node.js, and Svelte.
- Familiarity with charting libraries.
- Knowledge of AI/ML frameworks.
- Basic Git and DevOps understanding.
- Effective communication and teamwork.
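As an illustrative sketch of the Streamlit-plus-charting work described in the responsibilities above, the snippet below renders a tiny dashboard. The data is synthetic and the metric names are assumptions, not a Blue Hex product.

```python
# app.py -- run with: streamlit run app.py
import numpy as np
import pandas as pd
import streamlit as st

st.title("Illustrative metrics dashboard")

# Synthetic time-series data standing in for real analytics output.
rng = np.random.default_rng(seed=42)
df = pd.DataFrame(
    {
        "day": pd.date_range("2024-01-01", periods=30, freq="D"),
        "signups": rng.integers(50, 200, size=30),
        "active_users": rng.integers(500, 1500, size=30),
    }
).set_index("day")

metric = st.selectbox("Metric", df.columns)
st.line_chart(df[metric])
st.dataframe(df.describe())
```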
Company details: We are a team of Enterprise Transformation Experts who deliver radically transforming products, solutions, and consultation services to businesses of any size. Our exceptional team of diverse and passionate individuals is united by a common mission to democratize the transformative power of AI.
Website: Blue Hex Software – AI | CRM | CXM & DATA ANALYTICS
Position: Founding Engineer (SDE)
Location: Remote
Do you have a passion for developing cutting-edge software? Do you thrive in the innovative atmosphere of a startup? If coding is your art, and you have the prowess to turn problems into sophisticated solutions, then we're looking for you!
We are an early-stage startup working on re-defining how businesses in India approach accounting.
The Role:
- Dive deep into coding challenges, transforming ideas into groundbreaking software solutions.
- Play a pivotal role in our startup's journey, working closely with a dynamic team to develop the next big thing.
- See the immediate impact of your work in a rapidly-growing company.
What We're Looking For:
- Experience: Minimum 2 years in software development.
- Problem Solvers: Ability to take on a challenge, break it down, and deliver an end-to-end solution.
- Team Player: Collaborative mindset to work seamlessly with cross-functional teams.
- Fast Learner: Adaptability in a rapidly changing environment
Nice to Have:
- Victories in hackathons or coding challenges, showcasing your ability to innovate under pressure.
Why Join Us?
- Be at the forefront of our exciting journey, where your contributions directly shape our success.
- Collaborative and open culture – your insights and expertise won't just be appreciated; they'll be celebrated!
- Opportunities for rapid growth, professional development, and unique challenges.
- Competitive, class-leading compensation
CTC: 25-40 LPA based on experience
Bachelor’s degree (minimum) in Computer Science or Engineering.
5-7 years of experience working as a senior-level Software Engineer
Excellent programming and debugging skills in C/C++ and Python
Experience developing on Windows and Linux systems
Experience in automation of manual tasks (a short illustrative Python sketch appears after this list)
Although not required, the following are a plus:
Experience working with Build framework (Makefile, CMake, Scons), Batch/Shell scripting
Experience with Jenkins and other CI/CD tools
Knowledge of RESTful web services and Docker
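As a small, generic illustration of the "automation of manual tasks" point above, the script below archives old log files. The paths, file pattern, and seven-day rule are hypothetical and stand in for whatever repetitive chore the role actually automates.

```python
#!/usr/bin/env python3
"""Archive log files older than N days -- a typical manual task worth automating."""
import argparse
import shutil
import time
from pathlib import Path

def archive_old_logs(log_dir: Path, archive_dir: Path, max_age_days: int) -> int:
    """Move *.log files older than max_age_days into archive_dir; return the count moved."""
    archive_dir.mkdir(parents=True, exist_ok=True)
    cutoff = time.time() - max_age_days * 86400
    moved = 0
    for log_file in log_dir.glob("*.log"):
        if log_file.stat().st_mtime < cutoff:
            shutil.move(str(log_file), archive_dir / log_file.name)
            moved += 1
    return moved

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("log_dir", type=Path)
    parser.add_argument("archive_dir", type=Path)
    parser.add_argument("--max-age-days", type=int, default=7)
    args = parser.parse_args()
    count = archive_old_logs(args.log_dir, args.archive_dir, args.max_age_days)
    print(f"Archived {count} file(s)")
```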