
● Experience with various stream-processing and batch-processing tools (Kafka, Spark, etc.) and programming in Python (see the sketch after this list).
● Experience with relational and non-relational databases.
● Fairly good understanding of AWS (or an equivalent cloud platform).
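As a rough illustration of the stream-processing stack named above, here is a minimal sketch of a Kafka consumer in Python (using the kafka-python client); the topic name, broker address, and processing step are assumptions for illustration only.

```python
# Minimal sketch: consume JSON events from a Kafka topic in Python.
# Topic name, broker address, and the processing step are placeholders.
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "events",                             # hypothetical topic
    bootstrap_servers="localhost:9092",   # hypothetical broker
    group_id="example-consumer-group",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Replace with real processing: validation, enrichment, writing to a sink, etc.
    print(event)
```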
Key Responsibilities
● Design new systems and redesign existing systems to work at scale.
● Care about fault tolerance, durability, backups and recovery, performance, maintainability, code simplicity, and related concerns.
● Lead a team of software engineers and help create an environment of ownership
and learning.
● Introduce best practices of software development and ensure their adoption
across the team.
● Help set and maintain coding standards for the team.

We are looking for enthusiastic JavaScript interns who are well versed in React.js. Your primary focus will be on developing user interface components and implementing them following well-known React.js workflows (such as Flux or Redux). You will ensure that these components and the overall application are robust and easy to maintain. You will coordinate with the rest of the team working on different layers of the infrastructure. Therefore, a commitment to collaborative problem-solving, sophisticated design, and quality products is important.
Responsibilities
• Developing new user-facing features using React.js and Node.js
• Building reusable components and front-end libraries for future use
• Translating designs and wireframes into high-quality code
• Optimizing components for maximum performance across a vast array of web-capable devices and browsers
Skills
• Strong proficiency in JavaScript, including DOM manipulation and the JavaScript object model
• Thorough understanding of React.js and its core principles
• Experience with popular React.js workflows (such as Flux or Redux)
• Knowledge of isomorphic React is a plus
• Familiarity with RESTful APIs
• Knowledge of modern authorization mechanisms, such as JSON Web Token
• Experience with common front-end development tools such as Babel, Webpack, NPM, etc.
• Ability to understand business requirements and translate them into technical requirements
• A knack for benchmarking and optimization
• Familiarity with code versioning tools such as Git, SVN, and Mercurial
Skills: JavaScript, Redux/Flux, React.js, and Node.js
Fixed stipend between 5k and 7k
Post-internship, we offer a PPO (pre-placement offer) and the chance to continue full-time.
Job Description: DevOps Engineer
About Hyno:
Hyno Technologies is a unique blend of top-notch designers and world-class developers for new-age product development. Within the last 2 years we have collaborated with 32 young startups from India, the US, and the EU to find the optimum solution to their complex business problems. We have helped them address issues of scalability and optimisation through the use of technology at minimal cost. To us, any new challenge is an opportunity.
As part of its expansion plans, Hyno, in partnership with Sparity, is seeking an experienced DevOps Engineer to join our dynamic team. As a DevOps Engineer, you will play a crucial role in enhancing our software development processes, optimising system infrastructure, and ensuring the seamless deployment of applications. If you are passionate about leveraging cutting-edge technologies to drive efficiency, reliability, and scalability in software development, this is the perfect opportunity for you.
Position: DevOps Engineer
Experience: 5-7 years
Responsibilities:
- Collaborate with cross-functional teams to design, develop, and implement CI/CD pipelines for automated application deployment, testing, and monitoring.
- Manage and maintain cloud infrastructure using tools like AWS, Azure, or GCP, ensuring scalability, security, and high availability.
- Develop and implement infrastructure as code (IaC) using tools like Terraform or CloudFormation to automate the provisioning and management of resources.
- Constantly evaluate continuous integration and continuous deployment solutions as the industry evolves, and develop standardised best practices.
- Work closely with development teams to provide support and guidance in building applications with a focus on scalability, reliability, and security.
- Perform regular security assessments and implement best practices for securing the entire development and deployment pipeline.
- Troubleshoot and resolve issues related to infrastructure, deployment, and application performance in a timely manner.
- Follow regulatory and ISO 13485 requirements.
- Stay updated with industry trends and emerging technologies in the DevOps and cloud space, and proactively suggest improvements to current processes.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or related field (or equivalent work experience).
- Minimum of 5 years of hands-on experience in DevOps, system administration, or related roles.
- Solid understanding of containerization technologies (Docker, Kubernetes) and orchestration tools
- Strong experience with cloud platforms such as AWS, Azure, or GCP, including services like ECS, S3, RDS, and more.
- Proficiency in at least one programming/scripting language such as Python, Bash, or PowerShell.
- Demonstrated experience in building and maintaining CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI.
- Familiarity with configuration management tools like Ansible, Puppet, or Chef.
- Experience with container (Docker, ECS, EKS), serverless (Lambda), and Virtual Machine (VMware, KVM) architectures.
- Experience with infrastructure as code (IaC) tools like Terraform, CloudFormation, or Pulumi (see the sketch after this list).
- Strong knowledge of monitoring and logging tools such as Prometheus, ELK stack, or Splunk.
- Excellent problem-solving skills and the ability to work effectively in a fast-paced, collaborative environment.
- Strong communication skills and the ability to work independently as well as in a team.
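To make the IaC requirement above concrete, here is a minimal, hedged sketch using Pulumi's Python SDK (one of the tools listed); the resource name, tags, and export are illustrative only and not part of any existing stack.

```python
# Minimal Pulumi (Python) sketch: declare an S3 bucket as infrastructure as code.
# Resource name, tags, and export are illustrative; apply with `pulumi up` in a configured stack.
import pulumi
import pulumi_aws as aws

# Declarative resource: Pulumi diffs the desired state against the stack and provisions it.
artifact_bucket = aws.s3.Bucket(
    "artifact-bucket",
    acl="private",
    tags={"ManagedBy": "pulumi", "Environment": "dev"},
)

# Export the bucket name so pipelines or other stacks can reference it.
pulumi.export("artifact_bucket_name", artifact_bucket.id)
```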
Nice to Have:
- Relevant certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, Certified Kubernetes Administrator (CKA), etc.
- Experience with microservices architecture and serverless computing.
Soft Skills:
- Excellent written and verbal communication skills.
- Ability to manage conflict effectively.
- Ability to adapt and be productive in a dynamic environment.
- Strong communication and collaboration skills supporting multiple stakeholders and business operations.
- Self-starter, self-managed, and a team player.
Join us in shaping the future of DevOps at Hyno in collaboration with Sparity. If you are a highly motivated and skilled DevOps Engineer, eager to make an impact in a remote setting, we'd love to hear from you.
About us:
http://www.productsup.com">Productsup provides an award-winning platform for feed management, product content syndication, marketplace integration, and seller onboarding. The platform empowers businesses to take complete control of their product data and break through the digital walls that hinder growth. With Productsup, businesses can syndicate content to digital marketing, shopping, or business channels, including Google, Amazon, Facebook, Walmart, and more.
We are headquartered in Berlin, Germany, have entities in 5 different countries, and are in the process of setting up our Indian entity. Productsup is trusted by more than 800 businesses worldwide, including 5 Fortune 20 companies and market leaders like IKEA, Walmart, Superdry, and Rakuten.
We are proud that we have recently raised an additional 20 million dollars in funding and that our employees enjoy working at Productsup, as shown by our Glassdoor and Kununu ratings.
About the role:
We’re looking for a hands-on Engineering Lead/Managing Architect for our API/Backend Development team. In this role, you’ll lead the effort to develop our marketplace channels and APIs, as well as the scalable data pipelines that process over 5 billion products a day.
In your first 6 months and beyond, you will:
- Learn the ins and outs of our platform, our codebase and our engineering processes
- Help build features for software that works with large data sets
- Play a key cross-functional role within our team of highly talented JavaScript and PHP developers
- Develop solutions for high availability software
- Receive customer requests from the Product Team and develop solutions in the form of new features or products
- Debug and maintain APIs
- Work closely within the Platform team and with the API team to develop internal APIs and client-facing interfaces.
What you bring to the team:
- 2+ years of demonstrated Tech leadership experience.
- 7+ years of experience in PHP (good knowledge of Slim, Silex, or Symfony frameworks) and MySQL
- Some knowledge of integration architecture patterns such as scalable microservices, API gateways, API scalability
- Deep knowledge in APIs, web services, command-line programs, TDD, and SOLID design principles
- Good understanding of B2B SaaS products and experience dealing with big data
- Knowledge of developing and securing APIs
- Extensive database experience with Elasticsearch, MongoDB, or Couchbase
- Outstanding communication skills in English
- Educational background in IT or Computer Science is a plus
Benefits & Perks:
- Attractive Salary and benefits
- MacBook (latest version) for our new joiners
- A unique and thorough onboarding program where you’ll learn the ins and outs of our company and product
- Employee referral bonuses
- High level of personal responsibility and impact
Explore more here
LinkedIn: https://www.linkedin.com/company/productsup/
Life at Productsup: https://www.youtube.com/watch?v=olqMdYTBjHc
Do you have what it takes? We'd love to hear from you!
We know CVs don't always tell the whole story, so in addition to submitting your CV, feel free to let us know why you're interested in this role in a short cover letter (around 100 words).
Description of work at Aerotime:
We are looking for a Big Data Engineer who has worked across the entire ETL stack: someone who has ingested data in both batch and live-stream formats, transformed large volumes of data daily, built a data warehouse to store the transformed data, and integrated different visualization dashboards and applications with the data stores. The primary focus will be on choosing optimal solutions to use for these purposes, then maintaining, implementing, and monitoring them.
Responsibilities:
- Develop, test, and implement data solutions based on functional / non-functional business requirements.
- Code in Scala and PySpark daily, on cloud as well as on-prem infrastructure (see the sketch after this list)
- Build data models to store the data in the most optimized manner
- Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Implement the ETL process and an optimal data pipeline architecture
- Monitor performance and advise on any necessary infrastructure changes.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Proactively identify potential production issues and recommend and implement solutions
- Must be able to write quality code and build secure, highly available systems.
- Create design documents that describe the functionality, capacity, architecture, and process.
- Review peers' code and pipelines before deploying to production, checking for optimization issues and adherence to code standards
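A minimal sketch of the daily PySpark work mentioned in the responsibilities above; the input path, columns, and aggregation are made-up placeholders, not Aerotime's actual pipeline.

```python
# Minimal PySpark batch-transform sketch: read raw JSON events, aggregate, write Parquet.
# Paths, column names, and the aggregation are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-events-rollup").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/events/2024-01-01/")

daily_rollup = (
    raw.filter(F.col("event_type").isNotNull())
       .groupBy("event_type", F.to_date("event_ts").alias("event_date"))
       .agg(F.count("*").alias("event_count"))
)

# Partitioned Parquet output is a common warehouse-friendly storage layout.
daily_rollup.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/warehouse/daily_event_counts/"
)
```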
Skill Sets:
- Good understanding of optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and ‘big data’ technologies.
- Proficient understanding of distributed computing principles
- Experience working with batch-processing and real-time systems using various open-source technologies like NoSQL stores, Spark, Pig, Hive, and Apache Airflow.
- Experience implementing complex projects dealing with considerable data sizes (petabyte scale).
- Optimization techniques (performance, scalability, monitoring, etc.)
- Experience with integration of data from multiple data sources
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB, etc.
- Knowledge of various ETL techniques and frameworks, such as Flume
- Experience with various messaging systems, such as Kafka or RabbitMQ
- Creation of DAGs for data engineering (see the Airflow sketch after this list)
- Expert at Python/Scala programming, especially for data engineering/ETL purposes
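To illustrate the DAG-creation item above, here is a minimal Apache Airflow sketch; the DAG id, schedule, and callable are hypothetical stand-ins for a real ETL job.

```python
# Minimal Apache Airflow DAG sketch: one daily task wrapping a placeholder ETL function.
# DAG id, schedule, and the callable are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_transform_load():
    # Placeholder for the real ETL step (e.g., kicking off a Spark job).
    print("running daily ETL")


with DAG(
    dag_id="daily_etl_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="extract_transform_load",
        python_callable=extract_transform_load,
    )
```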
Job Description
- Develop all user-facing products in Node.js and React.js (SSR)
- Build reusable components and front-end libraries for future use.
- Translate designs and wireframes into high-quality code
- Focus on code maintainability and performance of the application.
- Provide technical advice and assist in solving programming problems.
Requirements
• 5+ years of experience in frontend development, primarily using React.js
• Strong proficiency in JavaScript, including DOM manipulation and the JavaScript object model
• Good foundation in design and a knack for designing interactions and elegant interfaces
• Thorough understanding of Node.js and React.js (SSR) and their core principles
• Familiarity with RESTful APIs
• Familiarity with modern front-end build pipelines and tools
• Familiarity with code versioning (version control) tools, such as Git, SVN, and Mercurial
• Good to have experience with popular React.js workflows
• Ability to understand business requirements and translate them into technical requirements
• Preference will be given to those who have worked with libraries such as Bootstrap and Material js.
• Knowledge of GraphQL and CSS preprocessors like SASS
• Proficient in industry-standard best practices such as design patterns, coding standards, code modularity, prototyping, etc.
• Exposure to cloud platforms
• Versatile in choosing appropriate tools and frameworks for core and advanced Java development.
• Good in Spring Boot and the latest Java methodologies, suggesting best practices and proven solutions to the business.
• Expert in service-oriented solutions and microservices architecture (REST); should have been part of a monolithic-to-microservices re-architecture.
• Exposure to NoSQL databases such as Cassandra or MongoDB.
As a Senior Tech Lead:
You will be part of a thought-leadership team that designs and develops a leading cyber security solution protecting the digital assets of organizations such as Apple and the US Federal Government. This solution, used by global Fortune 100 corporations, will be massively scalable to secure their global networks.
You will bring to the table:
Domain: Networking and Network Security
Primary Skills: Java, Spring & Hibernate
Secondary Skills: Any one of Python / JavaScript / AngularJS / Shell / ANTLR / Groovy
Expertise
- Excellent skills & experience in Java, Spring & Hibernate
- Minimum 2 years of Experience in Networking and Network Security domain
- Any scripting language: Python / JavaScript / AngularJS / Shell / ANTLR / Groovy
- Strong object-oriented design skills, data structures, algorithms, and design patterns.
- Tools: Pivotal / GitHub / Jenkins
- Good to have: database design and management experience.
What you will do…
- You will be hands on, writing high quality code and ensuring on-time delivery.
- Provide guidance on software design, architecture, and interface choices.
- Design highly scalable, reliable, secure, and fault-tolerant systems with minimal guidance.
- Mentor engineers on design, coding, and troubleshooting.
- Analyse requirements and problems, and solve them with the best solution.
- Create platforms, reusable libraries, and utilities wherever applicable.
- Work in cross-functional team, collaborating with peers during entire SDLC.
- Work as part of a team to solve complex technical problems.
- Support customer queries and escalations to maintain high customer satisfaction.
About Benison
Benison Tech is a niche technology company that has been appointed by Intel, Broadcom, CISCO, Checkpoint, and Marvell to collaboratively spearhead next-generation Network Security, 5G, and Wireless technologies. We help our mutual customers get to market faster by applying our core technical brilliance to solving complex engineering problems.
We work with the world's leading technology companies on the latest bleeding-edge technologies, from 5G enablement to real-time ML-based network security systems.
Our interview process isn’t easy, but necessary to ensure that we are a fit for each other. You will be working in a dynamic fast paced environment on cutting edge technologies, so roll up your sleeves and get ready for the challenge. We need people who are drawn to technology challenges rather than work in a plush corporate role.
You are a fit for Benison if
- You want to work in the technologies of the future… Network Security, Cloud technologies, 5G and WiFi6.
- You have a deep-rooted desire to learn new technologies.
- You are driven by the passion of solving complex problems.
- You want to work with some of the best minds in the industry.










