About Bonzai Digital Pvt. Ltd.
About Us:
At Product Fusion, we are dedicated to building innovative and scalable software solutions. Our team is passionate about leveraging cutting-edge technologies to drive product excellence and create impactful digital experiences. We invite you to join our dynamic team and contribute to our mission of technological innovation.
Job Description:
We are seeking a talented and motivated Full-Stack Developer to join our team. The ideal candidate will have a strong background in both front-end and back-end development, with proficiency in modern web technologies and frameworks. You will work closely with our development team to design, develop, and deploy scalable web applications.
Requirements:
- Proven experience as a Full-Stack Developer or similar role
- Strong proficiency in React.js or Next.js
- Solid understanding of Python and frameworks like Django, FastAPI, or Flask (a minimal back-end sketch follows this list)
- Proficiency in Tailwind CSS for front-end development
- Experience with PostgreSQL or MySQL databases
- Familiarity with Kubernetes for container orchestration
- Excellent problem-solving skills and attention to detail
- Strong communication and teamwork abilities
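To ground the stack above, here is a minimal sketch of the kind of back-end endpoint this role involves. It assumes FastAPI (one of the frameworks listed); the `products` resource and the in-memory store standing in for PostgreSQL are purely illustrative.

```python
# Minimal FastAPI sketch of a CRUD-style endpoint; the Product model and the
# in-memory dictionary standing in for PostgreSQL are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Product(BaseModel):
    id: int
    name: str
    price: float

_DB: dict[int, Product] = {}  # stand-in for a real database table

@app.post("/products", response_model=Product)
def create_product(product: Product) -> Product:
    _DB[product.id] = product
    return product

@app.get("/products/{product_id}", response_model=Product)
def get_product(product_id: int) -> Product:
    if product_id not in _DB:
        raise HTTPException(status_code=404, detail="product not found")
    return _DB[product_id]
```

Saved as `main.py`, this sketch can be served locally with `uvicorn main:app --reload`.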
What We Offer:
- Competitive salary and benefits
- Flexible working hours and remote work options
- Opportunity to work with cutting-edge technologies
- Collaborative and innovative work environment
- Professional development and growth opportunities
Skill Set Requirements
Ruby on Rails (RoR)
Any front-end framework (good to have)
Docker, AWS, and Kubernetes
Sidekiq
Note: This position is based in Pune. Please apply only if you are in Pune or willing to relocate to Pune.
We are a product company with headquarters in New York and a development centre in Pune.
Key Responsibilities:
- Rewrite existing APIs in NodeJS.
- Remodel the APIs into a microservices-based architecture.
- Implement a caching layer wherever possible (see the cache-aside sketch after this list).
- Optimize the API for high performance and scalability.
- Write unit tests for API Testing.
- Automate the code testing and deployment process.
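The caching-layer item above usually means the cache-aside pattern. The sketch below shows it in Python with redis-py purely for brevity (the role itself targets NodeJS, where the shape is the same); `fetch_user_from_db`, the key scheme, and the TTL are hypothetical.

```python
# Cache-aside sketch: read through the cache, fall back to the database,
# then populate the cache with a TTL. Assumes redis-py; the helper and key
# scheme are hypothetical.
import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
TTL_SECONDS = 300  # keep cached entries for five minutes

def fetch_user_from_db(user_id: int) -> dict:
    # Placeholder for the real database query.
    return {"id": user_id, "name": "example"}

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)                 # 1. try the cache first
    if cached is not None:
        return json.loads(cached)
    user = fetch_user_from_db(user_id)      # 2. fall back to the database
    cache.setex(key, TTL_SECONDS, json.dumps(user))  # 3. populate the cache
    return user
```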
Skills Required:
- At least 2 years of experience developing Backends using NodeJS — should be well versed with its asynchronous nature & event loop, and know its quirks and workarounds.
- Excellent hands-on experience using MySQL or any other SQL Database.
- Good knowledge of MongoDB or any other NoSQL Database.
- Good knowledge of Redis, its data types, and their use cases.
- Experience with graph databases such as Neo4j, and with GraphQL.
- Experience developing and deploying REST APIs.
- Good knowledge of Unit Testing and available Test Frameworks.
- Good understanding of advanced JS libraries and frameworks.
- Experience with WebSockets, Service Workers, and Web Push Notifications.
- Familiar with NodeJS profiling tools.
- Proficient understanding of code versioning tools such as Git.
- Good knowledge of creating and maintaining DevOps infrastructure on cloud platforms.
- Should be a fast learner and a go-getter, without any fear of trying out new things.
Preferences:
- Experience building a large-scale social or location-based app.
Graas uses predictive AI to turbo-charge growth for eCommerce businesses. We are “Growth-as-a-Service”. Graas integrates traditional data silos and applies a machine-learning AI engine, acting as an in-house data scientist to predict trends and give real-time insights and actionable recommendations for brands. The platform can also turn insights into action by seamlessly executing these recommendations across marketplace storefronts, brand.coms, social and conversational commerce, performance marketing, inventory management, warehousing, and last-mile logistics, all of which impact a brand’s bottom line and drive profitable growth.
Roles & Responsibilities:
- Work on the implementation of real-time and batch data pipelines for disparate data sources.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies (a minimal batch-ETL sketch follows this list).
- Build and maintain an analytics layer that utilizes the underlying data to generate dashboards and provide actionable insights.
- Identify improvement areas in the current data system and implement optimizations.
- Work on specific areas of data governance including metadata management and data quality management.
- Participate in discussions with Product Management and Business stakeholders to understand functional requirements and interact with other cross-functional teams as needed to develop, test, and release features.
- Develop Proof-of-Concepts to validate new technology solutions or advancements.
- Work in an Agile Scrum team and help with planning, scoping and creation of technical solutions for the new product capabilities, through to continuous delivery to production.
- Work on building intelligent systems using various AI/ML algorithms.
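For the extract-transform-load work referenced above, here is a minimal batch-pipeline sketch assuming boto3 and pandas; the bucket names, object keys, and order schema are hypothetical.

```python
# One batch step of a hypothetical pipeline: pull a raw CSV from S3,
# aggregate it, and write the curated result back for the analytics layer.
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")

def extract(bucket: str, key: str) -> pd.DataFrame:
    obj = s3.get_object(Bucket=bucket, Key=key)
    return pd.read_csv(io.BytesIO(obj["Body"].read()))

def transform(orders: pd.DataFrame) -> pd.DataFrame:
    # Aggregate daily revenue per sales channel (illustrative transformation).
    orders["order_date"] = pd.to_datetime(orders["order_date"]).dt.date
    return orders.groupby(["order_date", "channel"], as_index=False)["amount"].sum()

def load(daily: pd.DataFrame, bucket: str, key: str) -> None:
    s3.put_object(Bucket=bucket, Key=key, Body=daily.to_csv(index=False).encode())

if __name__ == "__main__":
    raw = extract("raw-bucket", "orders/2024-01-01.csv")
    load(transform(raw), "curated-bucket", "daily_revenue/2024-01-01.csv")
```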
Desired Experience/Skill:
- Must have worked on Analytics Applications involving Data Lakes, Data Warehouses and Reporting Implementations.
- Experience with private and public cloud architectures with pros/cons.
- Ability to write robust code in Python and SQL for data processing. Experience in libraries such as Pandas is a must; knowledge of one of the frameworks such as Django or Flask is a plus.
- Experience in implementing data processing pipelines using AWS services: Kinesis, Lambda, Redshift/Snowflake, RDS.
- Knowledge of Kafka and Redis is preferred.
- Experience in the design and implementation of real-time and batch pipelines; knowledge of Airflow is preferred (a minimal DAG sketch follows this list).
- Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
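Since Airflow is listed as preferred, here is a minimal DAG sketch (assuming Airflow 2.x); the DAG id is hypothetical and the task body is a stub for a real batch step.

```python
# Minimal Airflow 2.x DAG: one daily task wrapping a placeholder batch job.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_daily_batch() -> None:
    # Placeholder for the real extract/transform/load step.
    print("running daily batch")

with DAG(
    dag_id="daily_orders_batch",        # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_daily_batch", python_callable=run_daily_batch)
```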
About us:
Arista Networks was founded to pioneer and deliver software-driven cloud networking solutions for large datacenter storage and computing environments. Arista's award-winning platforms, ranging in Ethernet speeds from 10 to 400 gigabits per second, redefine scalability, agility and resilience. Arista has shipped more than 20 million cloud networking ports worldwide with CloudVision and EOS, an advanced network operating system. Committed to open standards, Arista is a founding member of the 25/50GbE consortium. Arista Networks products are available worldwide directly and through partners.
About the job
Arista Networks is looking for world-class software engineers to join our Extensible Operating System (EOS) software development team. As a core member of the EOS team, you will be part of a fast-paced, high-caliber team building features to run the world's largest data center networks. Your software will be a key component of EOS, Arista's unique, Linux-based network operating system that runs on all of Arista's data center networking products.
The EOS team is responsible for all aspects of the development and delivery of software meant to run on the various Arista switches. You will work with your fellow engineers and members of the marketing team to gather and understand the functional and technical requirements for upcoming projects. You will help write functional specifications, design specifications, test plans, and the code to bring all of these to life. You will also work with customers to triage and fix problems in their networks. Internally, you will develop automated tests for your software, monitor the execution of those tests, and triage and fix problems found by your tests. At Arista, you will own your projects from definition to deployment, and you will be responsible for the quality of everything you deliver.
This role demands strong and broad software engineering fundamentals and a good understanding of networking, including capabilities like L2 and L3 and the fundamentals of commercial switching HW. Your role will not be limited to a single aspect of EOS at Arista, but will cover all aspects of EOS.
Responsibilities:
- Write functional specifications and design specifications for features related to forwarding traffic on the internet and cloud data centers.
- Independently implement solutions to small-sized problems in our EOS software, using the C, C++, and Python programming languages (a toy forwarding example follows this list).
- Write test plan specifications for small-sized features in EOS, and implement automated test programs to execute the cases described in the test plan.
- Debug problems found by our automated test programs and fix the problems.
- Work on a team implementing, testing, and debugging solutions to larger routing protocol problems.
- Work with Customer Support Engineers to analyze problems in customer networks and provide fixes for those problems when needed in the form of new software releases or software patches.
- Work with the System Test Engineers to analyze problems found in their tests and provide fixes for those problems.
- Mentor new and junior engineers to bring them up to speed in Arista’s software development environment.
- Review and contribute to the specifications and implementations written by other team members.
- Help to create a schedule for the implementation and debugging tasks, update that schedule weekly, and report it to the project lead.
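For a flavor of the forwarding logic the responsibilities above refer to, here is a toy longest-prefix-match lookup sketched in Python (one of the languages named in this posting); the routing table entries and interface names are hypothetical.

```python
# Toy longest-prefix-match: pick the most specific route that contains the
# destination address. Routes and interface names are hypothetical.
import ipaddress

ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "et1",
    ipaddress.ip_network("10.1.0.0/16"): "et2",
    ipaddress.ip_network("0.0.0.0/0"): "et3",  # default route
}

def next_hop_interface(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return ROUTES[best]

print(next_hop_interface("10.1.2.3"))  # -> "et2" (the /16 beats the /8)
```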
Qualifications:
- BS in Computer Science/Electrical Engineering/Computer Engineering plus 3-10 years of experience, MS in Computer Science/Electrical Engineering/Computer Engineering plus 5 years of experience, a Ph.D. in Computer Science/Electrical Engineering/Computer Engineering, or equivalent work experience.
- Knowledge of C, C++, and/or Python.
- Knowledge of UNIX or Linux.
- Understanding of L2/L3 networking including at least one of the following areas is desirable:
- IP routing protocols, such as RIP, OSPF, BGP, IS-IS, or PIM.
- Layer 2 features such as 802.1D bridging, the 802.1D Spanning Tree Protocol, the 802.1AX Link Aggregation Control Protocol, the 802.1AB Link Layer Discovery Protocol, or RFC 1812 IP routing.
- Ability to utilize, test, and debug a packet forwarding engine and a hardware component vendor's software libraries in your solutions.
- Infrastructure functions related to distributed systems such as messaging, signalling, databases, and command line interface techniques.
- Hands-on experience in the design and development of Ethernet bridging or routing related software or distributed systems software is desirable.
- Hands-on experience with enterprise- or service-provider-class Ethernet switch/router system software development, or significant PhD-level research in the area of network routing and packet forwarding.
- Applied understanding of software engineering principles.
- Strong problem solving and software troubleshooting skills.
- Ability to design a solution to a small-sized problem and implement that solution without outside help. Able to work on a small team solving a medium-sized problem with limited oversight.
Resources:
- Arista's Approach to Software with Ken Duda (CTO): https://youtu.be/TU8yNh5JCyw
- Additional information and resources can be found at https://www.arista.com/en/
- Strong in basic C++, STL, and Linux
- OOP and exception handling
- Design patterns and SOLID principles; concepts related to UML representation
- Solution, design, and architecture concepts
- Knowledge of pointers and smart pointers
- I/O streams, files and streams, and lambda expressions in C++ are an added advantage
- Familiarity with C++17 features and usage of the STL in C++ is an added advantage
- Templates in C++
- Communication skills, attitude, and learnability
Roles and Responsibilities
- Managing the availability, performance, and capacity of infrastructure and applications.
- Building and implementing observability for application health, performance, and capacity.
- Optimizing On-call rotations and processes.
- Documenting “tribal” knowledge.
- Managing infra platforms such as Mesos/Kubernetes, CI/CD, observability (Prometheus/New Relic/ELK), cloud platforms (AWS/Azure), databases, and data platform infrastructure.
- Providing help in onboarding new services with the production readiness review process.
- Providing reports on services' SLOs/Error Budgets/Alerts and operational overhead (see the error-budget sketch after this list).
- Working with Dev and Product teams to define SLOs/Error Budgets/Alerts.
- Working with the Dev team to gain an in-depth understanding of the application architecture and its bottlenecks.
- Identifying observability gaps in product services and infrastructure, and working with stakeholders to fix them.
- Managing outages, performing detailed RCAs with developers, and identifying ways to avoid recurrence.
- Managing/Automating upgrades of the infrastructure services.
- Automating away toil work.
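As a companion to the SLO/error-budget items above, here is a minimal sketch of the underlying arithmetic; the SLO target and request counts are hypothetical.

```python
# Error-budget arithmetic: a 99.9% SLO allows 0.1% of requests to fail;
# the remaining budget is the unspent fraction of that allowance.
def error_budget_remaining(slo_target: float, total_requests: int, failed_requests: int) -> float:
    """Return the fraction of the error budget still unspent (can go negative)."""
    allowed_failures = (1.0 - slo_target) * total_requests
    if allowed_failures == 0:
        return 0.0
    return 1.0 - failed_requests / allowed_failures

# Example: a 99.9% SLO over 1,000,000 requests allows 1,000 failures;
# 250 observed failures leaves 75% of the budget.
print(error_budget_remaining(0.999, 1_000_000, 250))  # 0.75
```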
Experience & Skills
- 6+ years of total experience
- Experience as an SRE/DevOps/Infrastructure Engineer on large-scale microservices and infrastructure.
- A collaborative spirit with the ability to work across disciplines to influence, learn, and deliver.
- A deep understanding of computer science, software development, and networking principles.
- Demonstrated experience with languages such as Python, Java, Golang, etc.
- Extensive experience with Linux administration and a good understanding of the various Linux kernel subsystems (memory, storage, network, etc.).
- Extensive experience with DNS, TCP/IP, UDP, gRPC, routing, and load balancing.
- Expertise in GitOps and Infrastructure-as-Code tools such as Terraform, and configuration management tools such as Chef, Puppet, SaltStack, or Ansible.
- Expertise in Amazon Web Services (AWS) and/or other relevant cloud infrastructure solutions such as Microsoft Azure or Google Cloud.
- Experience in building CI/CD solutions with tools such as Jenkins, GitLab, Spinnaker, Argo, etc.
- Experience in managing and deploying containerized environments using Docker and Mesos/Kubernetes is a plus.
Work Location: Andheri East, Mumbai
Experience: 2-4 Years
About the Role:
At Bizongo, we believe in delivering excellence that drives business efficiency for our customers. As a Software Engineer at Bizongo, you will be working on developing the next generation of technology that will impact how businesses take care of their processes and derive process excellence. We are looking for engineers who can bring fresh ideas, function at scale and are passionate about technology. We expect our engineers to be multidimensional, display leadership and have a zeal for learning as well as experimentation as we push business efficiency through our technology. As a DevOps Engineer, you should have hands-on experience as a DevOps engineer with strong technical proficiency in public clouds, Linux, and programming/scripting.
Job Responsibilities:
Gather and analyse cloud infrastructure requirements
Automate obsessively
Support existing infrastructure, analyse problem areas and come up with solutions
Optimise stack performance and costs
Write code for new and existing tools
Must-haves:
Experience with DevOps techniques and philosophies
Passion to work in an exciting fast paced environment
Self-starter who can implement with minimal guidance
Good conceptual understanding of the building blocks of modern web-based infrastructure: DNS, TCP/IP, Networking, HTTP, SSL/TLS
Strong Linux skills
Experience with automation of code builds and deployments
Experience in nginx configuration for dynamic web applications
Help with cost optimisations of infrastructure requirements
Assist development teams with any infrastructure needs
Strong command line skills to automate routine system administration tasks (a small automation example follows this list)
An eye for monitoring: the ideal candidate should be able to look at complex infrastructure and figure out what to monitor and how.
Databases: MySQL, PostgreSQL and cloud-based relational database solutions like Amazon RDS. Database replication and scalability
High Availability: Load Balancing (ELB), Reverse Proxies, CDNs etc.
Scripting Languages: Python/Bash/Shell/Perl
Version control with Git. Exposure to various branching workflows and collaborative development
Virtualisation and Docker
AWS core components (or their GCP/Azure equivalents) and their management: EC2, ELB, NAT, VPC, IAM Roles and policies, EBS and S3, CloudFormation, Elasticache, Route53, etc.
Configuration Management: SaltStack/Ansible/Puppet
CI/CD automation experience
Understanding of Agile development practices is a plus
Bachelor’s degree in Information Science / Information Technology, Computer Science, Engineering, Mathematics, Physics, or a related field
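As one example of the routine automation mentioned above, here is a small boto3 sketch that flags running EC2 instances missing a cost-allocation tag; the `team` tag convention is a hypothetical policy, not something from this posting.

```python
# Flag running EC2 instances that are missing a cost-allocation tag.
# Assumes boto3 with default credentials; the "team" tag is hypothetical.
import boto3

ec2 = boto3.client("ec2")

def untagged_running_instances(required_tag: str = "team") -> list[str]:
    flagged = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if required_tag not in tags:
                    flagged.append(instance["InstanceId"])
    return flagged

if __name__ == "__main__":
    print(untagged_running_instances())
```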
Why work with us?
Opportunity to work with "India’s leading B2B" E-commerce venture. The company grew its revenue by more than 12x last year to reach a 200 Cr annual revenue run rate in March 2018. We invite you to be part of the upcoming growth story of the B2B sector through Bizongo.
Having formed a strong base in the E-commerce, Supply Chain, and Retail industries, we are now exploring the FMCG, Food Processing, Engineering, Consumer Durables, Auto Ancillary, and Chemical industries.
Design and Development was launched at Bizongo recently and is seeing tremendous growth as a steady, additional revenue stream.
Opportunity to work with some of the most dynamic individuals in Asia, recognized under Forbes 30 Under 30, and industry stalwarts from companies like Microsoft, PayPal, Gravitas, Parksons, ITC, Snapdeal, FedEx, Deloitte and HUL.
Working in Bizongo translates into being a part of a dynamic start-up with some of the most enthusiastic, hardworking and intelligent people in a fast paced and electrifying environment
Bizongo has been awarded as the most Disruptive Procurement Startup of the year - 2017
Being a company that is expanding itself every day and working towards exploring newer avenues in the market, every employee grows with the company
The position provides a chance to build on existing talents, learn new skills and gain valuable experience in the field of Ecommerce
About the Company:
Company Website: https://www.bizongo.com/
Any solution worth anything is unfailingly preceded by clear articulation of a problem worth solving. Even a modest study of Indian Packaging industry would lead someone to observe the enormous fragmentation, chaos and rampant unreliability pervading the ecosystem. When businesses are unable to cope even with these basic challenges, how can they even think of materializing an eco-friendly & resource-efficient packaging economy? These are some hardcore problems with real-world consequences which our country is hard-pressed to solve.
Bizongo was conceived as an answer to these first-level challenges of disorganization in the industry. We employed technology to build a business model that can streamline the packaging value chain and has enormous potential to scale sustainably. Our potential to fill this vacuum was recognized early on by Accel Partners and IDG Ventures, who jointly led our Series A funding. Most recently, B Capital Group, a global tech fund led by Facebook co-founder Eduardo Saverin, invested in our technological capabilities when it jointly led our Series B funding with IFC.
The International Finance Corporation (IFC), the private-sector investment arm of the World Bank, cited our positive ecosystem impact on the network of 30,000 SMEs operating in the packaging industry as one of the core reasons for its investment decision. Beyond these bastions of support, we are extremely grateful to have found validation from various authoritative institutions, including Forbes 30 Under 30 Asia. Being the only major B2B player in the country with such an unprecedented model has lent us enormous scope for experimentation in our efforts to break new ground. Dreaming and learning together thus, we have grown from a team of 3, founded in 2015, to a 250+ strong family with office presence across Mumbai, Gurgaon and Bengaluru. So those who strive for opportunities to rise above their own limitations, who seek to build an ecosystem of positive change and to find remarkable solutions to challenges where none existed before, such creators would find a welcome abode in Bizongo.
Responsible for planning, connecting, designing, scheduling, and deploying data warehouse systems. Develops, monitors, and maintains ETL processes, reporting applications, and data warehouse design.
Role and Responsibility
- Plan, create, coordinate, and deploy data warehouses.
- Design the end-user interface.
- Create best practices for data loading and extraction.
- Develop data architecture, data modeling, and ETL mapping solutions within a structured data warehouse environment.
- Develop reporting applications and maintain data warehouse consistency.
- Facilitate requirements gathering using expert listening skills and develop unique, simple solutions to meet the immediate and long-term needs of business customers.
- Supervise design throughout the implementation process.
- Design and build cubes while performing custom scripts.
- Develop and implement ETL routines according to the DWH design and architecture.
- Support the development and validation required through the lifecycle of the DWH and Business Intelligence systems, maintain user connectivity, and provide adequate security for the data warehouse.
- Monitor the DWH and BI systems' performance and integrity, and provide corrective and preventative maintenance as required.
- Manage multiple projects at once.
DESIRABLE SKILL SET
- Experience with technologies such as MySQL, MongoDB, and SQL Server 2008, as well as with SSIS and stored procedures.
- Exceptional experience developing code, testing for quality assurance, administering RDBMS, and monitoring databases.
- High proficiency in dimensional modeling techniques and their applications (a small star-schema sketch follows this list).
- Strong analytical, consultative, and communication skills, as well as the ability to exercise good judgment and work with both technical and business personnel.
- Several years of working experience with Tableau, MicroStrategy, Information Builders, and other reporting and analytical tools.
- Working knowledge of SAS and R code used in data processing and modeling tasks.
- Strong experience with Hadoop, Impala, Pig, Hive, YARN, and other “big data” technologies such as AWS Redshift or Google Big Data.
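To illustrate the dimensional-modeling item above, here is a minimal star-schema sketch; the table names are hypothetical, and the DDL targets SQLite purely so the example is self-contained (a production DWH would use its own dialect).

```python
# Minimal star schema: one fact table keyed to two dimension tables.
# Table names are hypothetical; SQLite is used only to keep the sketch runnable.
import sqlite3

DDL = """
CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY,  -- e.g. 20240101
    full_date TEXT NOT NULL,
    month     INTEGER NOT NULL,
    year      INTEGER NOT NULL
);
CREATE TABLE dim_product (
    product_key INTEGER PRIMARY KEY,
    sku         TEXT NOT NULL,
    category    TEXT
);
CREATE TABLE fact_sales (
    date_key    INTEGER NOT NULL REFERENCES dim_date(date_key),
    product_key INTEGER NOT NULL REFERENCES dim_product(product_key),
    quantity    INTEGER NOT NULL,
    revenue     REAL NOT NULL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)  # build the star schema: one fact, two dimensions
print("tables:", [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")])
```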