23+ Kibana Jobs in India
Role - Performance tester
Location – Hyderabad
Experience – 6.5 to 10 years only (at least 5 years of hands-on LoadRunner experience; strong LoadRunner skills required)
Compensation - between 13 to 17 LPA
We need associates who are strong in LoadRunner. The JD is below.
- Ability to work independently on project performance testing with minimal supervision.
- Strong performance testing skills, with working experience in API testing and web applications.
- Understanding of the requirements/architecture and NFRs.
- Good experience in performance testing using Micro Focus Performance Center (LoadRunner).
- Able to understand the complex architecture of a product developed in various technologies like Java, Spring Boot, PCF/AWS cloud technologies, etc.
- Hands-on experience in developing scripts, designing test scenarios, test execution, monitoring, and analyzing results.
- Participate in the development and reporting of test metrics, such as test confidence and test coverage reports.
- Experience in setting up/designing load scenarios for various tests like load tests, endurance tests, stress tests, capacity measurements, etc. (a sketch follows this list).
- Experience in profiling and monitoring tools like AppDynamics, Grafana, Kibana.
- Monitor application logs to determine system behaviour; address all technical issues and facilitate their resolution, with the necessary follow-up with Development and other cross-functional departments.
- Analyse CPU utilization, memory usage, garbage collection, and DB monitoring parameters and DB reports to verify the performance of the applications.
- Experience in setting up test environments and debugging environment issues.
- Experience in identifying H/W and S/W needs when setting up a test environment.
- Knowledge of sound development practices with exposure to Agile development, release engineering, and continuous integration is a plus.
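The load-scenario item above refers to the sketch below. The JD asks for Micro Focus LoadRunner, where scripts are authored in VuGen; the snippet instead uses Locust, a Python load-testing tool, purely to illustrate what a simple weighted load/endurance scenario looks like. The host, endpoints, and user counts are hypothetical.

```python
# Illustrative only: this JD asks for Micro Focus LoadRunner (VuGen/C scripts).
# Locust (Python) is used here just to show the shape of a load scenario.
from locust import HttpUser, task, between


class ApiUser(HttpUser):
    # Simulated users pause 1-3 seconds between requests
    wait_time = between(1, 3)

    @task(3)  # weighted: browsing happens ~3x as often as checkout
    def browse_catalog(self):
        self.client.get("/api/products")  # hypothetical endpoint

    @task(1)
    def checkout(self):
        self.client.post("/api/checkout", json={"cart_id": "demo"})  # hypothetical


# Example endurance-style run (flags shown for illustration):
#   locust -f loadtest.py --headless --host https://test-env.example.com \
#          --users 200 --spawn-rate 10 --run-time 1h
```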
at PortOne
PortOne is re-imagining payments in Korea and other international markets. We are a Series B funded startup backed by prominent VC firms SoftBank and Hanwa Capital.
PortOne provides a unified API for merchants to integrate with and manage all of the payment options available in Korea and SEA markets such as Thailand, Singapore, Indonesia, etc. It is currently used by 2,000+ companies and processes multi-billion dollars in annualized volume. We are building a team to take this product to international markets and are looking for engineers with a passion for fintech and digital payments.
Culture and Values at PortOne
- You will be joining a team that stands for Making a difference.
- You will be joining a culture that identifies more with sports teams than with a 9-to-5 workplace.
- This will be a remote role that gives you the flexibility to save time on your commute
- You will have peers who are/have:
- Highly Self Driven with A sense of purpose
- High Energy Levels - Building stuff is your sport
- Ownership - Solve customer problems end to end - Customer is your Boss
- Hunger to learn - Highly motivated to keep developing new tech skill sets
Who you are?
* You are an athlete and Devops/DevSecOps is your sport.
* Your passion drives you to learn and build things, not because your manager tells you to.
* Your work ethic is that of an athlete preparing for your next marathon. Your sport drives you and you like being in the zone.
* You are NOT a clock-watcher renting out your time, and you do NOT have an attitude of "I will do only what is asked for"
* You enjoy solving problems and delighting users, both internal and external
* You take pride in working on projects to successful completion, involving a wide variety of technologies and systems
* You possess strong and effective communication skills and the ability to present complex ideas in a clear and concise way
* You are responsible, self-directed, a forward thinker, and operate with focus, discipline and minimal supervision
* A team player with a strong work ethic
Experience
* 2+ years of experience working as a DevOps/DevSecOps Engineer
* BE in Computer Science or equivalent combination of technical education and work experience
* Must have actively managed infrastructure components & devops for high quality and high scale products
* Proficient knowledge of and experience with infra concepts - Networking / Load Balancing / High Availability
* Experience designing and configuring infra on cloud service providers - AWS / GCP / Azure
* Knowledge on Secure Infrastructure practices and designs
* Experience with DevOps, DevSecOps, Release Engineering, and Automation
* Experience with Agile development incorporating TDD / CI / CD practices
Hands on Skills
* Proficient in at least one high-level programming language: Go / Java / C
* Proficient in scripting (Bash etc.) to build/glue together DevOps/data-pipeline workflows
* Proficient in cloud services - AWS / GCP / Azure
* Hands-on experience with CI/CD & relevant tools - Jenkins / Travis / GitOps / SonarQube / JUnit / mock frameworks
* Hands-on experience with the Kubernetes ecosystem & container-based deployments - Kubernetes / Docker / Helm charts / Vault / Packer / Istio / Flyway
* Hands-on experience with Infra-as-Code frameworks - Terraform / Crossplane / Ansible
* Version control & code quality: Git / GitHub / Bitbucket / SonarQube
* Experience with monitoring tools: Elasticsearch / Logstash / Kibana / Prometheus / Grafana / Datadog / Nagios
* Experience with RDBMS databases & caching services: Postgres / MySQL / Redis / CDN
* Experience with data pipeline/workflow tools: Airflow / Kafka / Flink / Pub-Sub
* DevSecOps - Cloud Security Assessment, Best Practices & Automation
* DevSecOps - Vulnerability Assessments/Penetration Testing for Web, Network and Mobile applications
* Preferable to have DevOps/infra experience for products in the Payments/Fintech domain - Payment Gateways/Bank integrations etc.
What will you do?
DevOps
* Provisioning the infrastructure using Crossplane/Terraform/CloudFormation scripts.
* Creating and managing AWS services - EC2, RDS, EKS, S3, VPC, KMS and IAM - including EKS clusters and RDS databases.
* Monitor the infra to prevent outages/downtime and honor our infra SLAs.
* Deploy and manage new infra components.
* Update and migrate clusters and services.
* Reduce cloud cost by identifying and scheduling shutdown of under-utilized instances (a sketch follows this list).
* Collaborate with stakeholders across the organization, such as experts in product, design, and engineering.
* Uphold best practices in DevOps/DevSecOps and infra management, with attention to security best practices.
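As a rough illustration of the cost-reduction bullet above (not PortOne's actual tooling), here is a minimal Python sketch using boto3 that finds EC2 instances with low average CPU over the last day and stops them. The tag convention, CPU threshold, and stop-versus-report decision are assumptions.

```python
# Minimal sketch: stop under-utilized non-production EC2 instances.
import datetime

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
now = datetime.datetime.utcnow()


def avg_cpu(instance_id: str) -> float:
    """Average CPUUtilization over the last 24 hours."""
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - datetime.timedelta(days=1),
        EndTime=now,
        Period=3600,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0


# Only consider non-production instances (hypothetical tag convention)
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:env", "Values": ["dev", "staging"]},
             {"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for r in reservations:
    for inst in r["Instances"]:
        iid = inst["InstanceId"]
        cpu = avg_cpu(iid)
        if cpu < 5.0:  # assumed "under-utilized" threshold
            print(f"Stopping {iid} (avg CPU {cpu:.1f}%)")
            ec2.stop_instances(InstanceIds=[iid])
```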
DevSecOps
* Cloud Security Assessment & Automation
* Modify existing infra to adhere to security best practices
* Perform Threat Modelling of Web/Mobile applications
* Integrate security testing tools (SAST, DAST) into CI/CD pipelines
* Incident management and remediation - Monitoring security incidents, recovery from and remediation of the issues
* Perform frequent Vulnerability Assessments/Penetration Testing for Web, Network and Mobile applications
* Ensure the environment is compliant with CIS, NIST, PCI, etc.
Here are examples of apps/features you will be supporting as a DevOps/DevSecOps Engineer:
* Intuitive, easy-to-use APIs for payment processing.
* Integrations with local payment gateways in international markets.
* Dashboard to manage gateways and transactions.
* Analytics platform to provide insights
Job Title: ELK Stack Engineer
Position Overview:
We are seeking a skilled and experienced Senior Elasticsearch Developer to join our team. The ideal candidate will be responsible for designing, developing, and maintaining data pipelines using the ELK Stack (Elasticsearch 7.1 or higher, Logstash, Kibana). The Senior Elasticsearch Developer will play a key role in managing large datasets, optimizing complex queries, and maintaining self-hosted Elasticsearch clusters.
Responsibilities:
- Design, develop, and maintain data pipelines using the ELK Stack.
- Handle large datasets (3 billion+ records, 100+ fields per index) with efficiency and accuracy.
- Develop and optimize complex Elasticsearch queries for efficient data retrieval and analysis (see the sketch after this list).
- Manage and maintain self-hosted Elasticsearch clusters.
- Implement DevOps practices, including containerization with Docker.
- Perform backups, disaster recovery, and index migrations to ensure data security and integrity.
- Execute data processing and cleaning techniques, including writing custom scripts to extract and transform data into usable formats.
- Collaborate with the engineering team to integrate Elasticsearch with other systems.
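For the query-optimization item above, here is a minimal sketch using the official Elasticsearch Python client: an aggregation-only search with cacheable filters, which is the usual way to keep analytical queries cheap on very large indices. The index name, field names, and the 7.x-style `body=` call are assumptions; adapt them to your cluster and client version.

```python
# Minimal sketch: aggregation-only query over a large index.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # self-hosted cluster (assumed URL)

query = {
    "size": 0,  # only aggregations, no hits -> cheaper on billions of docs
    "query": {
        "bool": {
            "filter": [  # non-scoring filters are cacheable and fast
                {"term": {"status": "error"}},
                {"range": {"@timestamp": {"gte": "now-24h"}}},
            ]
        }
    },
    "aggs": {
        "errors_per_service": {
            "terms": {"field": "service.keyword", "size": 10}
        }
    },
}

resp = es.search(index="app-logs-*", body=query)  # index pattern is hypothetical
for bucket in resp["aggregations"]["errors_per_service"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```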
Requirements:
- Bachelor's degree in Computer Science, Engineering, or related field.
- 5+ years of experience in designing, developing, and maintaining data pipelines using the ELK Stack.
- Hands-on experience handling large datasets (3 million+ records, 50+ fields per index).
- Strong proficiency in developing and optimizing complex Elasticsearch queries.
- Experience managing and maintaining self-hosted Elasticsearch clusters.
- Proficiency in DevOps practices, including containerization with Docker.
- Knowledge of security best practices, including backups, disaster recovery, and index migrations.
- Experience with database administration and data processing techniques.
- Excellent communication and teamwork skills.
Benefits:
- Competitive salary and benefits package (20 to 35 LPA).
- Opportunities for professional development and growth.
- Collaborative and inclusive work environment.
- Flexible work hours and remote work options.
EXPERTISE AND QUALIFICATIONS
- 14+ years of experience in Software Engineering with at least 6+ years as a Lead Enterprise Architect preferably in a software product company
- High technical credibility - ability to lead technical brainstorming, take decisions and push for the best solution to a problem
- Experience in architecting Microservices based E2E Enterprise Applications
- Experience in UI technologies such as Angular and Node.js, or full-stack experience, is desirable
- Experience with NoSQL technologies (MongoDB, Neo4j, etc.)
- Elasticsearch, Kibana, ELK, Logstash.
- Good understanding of Kafka, Redis, ActiveMQ, RabbitMQ, Solr, etc.
- Exposure to SaaS cloud-based platforms.
- Experience with Docker, Kubernetes, etc.
- Experience in planning, designing, developing and delivering Enterprise Software using Agile Methodology
- Key Programming Skills: Java, J2EE with cutting edge technologies
- Hands-on technical leadership with proven ability to recruit and mentor high performance talents including Architects, Technical Leads, Developers
- Excellent team building, mentoring and coaching skills are a must-have
- A proven track record of consistently setting and achieving high standards
Five Reasons Why You Should Join Zycus
1. Cloud Product Company: We are a Cloud SaaS Company, and our products are created by using the latest technologies like ML and AI. Our UI is in Angular JS and we are developing our mobile apps using React.
2. A Market Leader: Zycus is recognized by Gartner (world’s leading market research analyst) as a Leader in Procurement Software Suites.
3. Move between Roles: We believe that change leads to growth and therefore we allow our employees to shift careers and move to different roles and functions within the organization
4. Get a Global Exposure: You get to work and deal with our global customers.
5. Create an Impact: Zycus gives you the environment to create an impact on the product and transform your ideas into reality. Even our junior engineers get the opportunity to work on different product features.
About Us
Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects; offers data-driven actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C type user-experience to the end-users.
Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization.
Start your #CognitiveProcurement journey with us, as you are #MeantforMore
About the Role
As a result of our rapid growth, we are looking for a Java Backend Engineer to join our existing Cloud Engineering team and take the lead in the design and development of several key initiatives of our existing Miko3 product line as well as our new product development initiatives.
Responsibilities
- Designing, developing and maintaining core system features, services and engines
- Collaborating with a cross-functional team of backend, mobile application, AI, signal processing, and robotics engineers, as well as the Design, Content, and Linguistics teams, to realize the requirements of a conversational social robotics platform; this includes investigating design approaches, prototyping new technology, and evaluating technical feasibility
- Ensure the developed backend infrastructure is optimized for scale and responsiveness
- Ensure best practices in design, development, security, monitoring, logging, and DevOps are adhered to throughout project execution.
- Introducing new ideas, products, features by keeping track of the latest developments and industry trends
- Operating in an Agile/Scrum environment to deliver high quality software against aggressive schedules
Requirements
- Proficiency in distributed application development lifecycle (concepts of authentication/authorization, security, session management, load balancing, API gateway), programming techniques and tools (application of tested, proven development paradigms)
- Proficiency in working on Linux-based operating systems.
- Proficiency in at least one server-side programming language like Java. Additional languages like Python and PHP are a plus.
- Proficiency in at least one server-side framework like Servlets, Spring, or Spark Java.
- Proficient in using ORM/data access frameworks like Hibernate or JPA with Spring or other server-side frameworks.
- Proficiency in at least one data serialization framework: Apache Thrift, Google Protocol Buffers, Apache Avro, Gson, Jackson, etc.
- Proficiency in at least one inter-process communication approach: WebSockets, RPC, message queues, custom HTTP libraries/frameworks (KryoNet, RxJava), etc.
- Proficiency in multithreaded programming and Concurrency concepts (Threads, Thread Pools, Futures, asynchronous programming).
- Experience defining system architectures and exploring technical feasibility tradeoffs (architecture, design patterns, reliability and scaling)
- Experience developing cloud software services and an understanding of design for scalability, performance and reliability
- Good understanding of networking and communication protocols, and proficiency in identifying CPU, memory, and I/O bottlenecks and handling read- and write-heavy workloads.
- Proficiency in the concepts of monolithic and microservice architectural paradigms.
- Proficiency in working on at least one cloud hosting platform like Amazon AWS, Google Cloud, Azure, etc.
- Proficiency in at least one SQL, NoSQL, or graph database, like MySQL, MongoDB, OrientDB
- Proficiency in at least one testing framework or tool: JMeter, Locust, Taurus
- Proficiency in at least one RPC communication framework: Apache Thrift, gRPC is an added plus
- Proficiency in asynchronous libraries (RxJava) and frameworks (Akka, Play, Vert.x) is an added plus
- Proficiency in functional programming languages (Scala) is an added plus
- Proficiency in working with NoSQL/graph databases is an added plus
- Proficient understanding of code versioning tools, such as Git, is an added plus
- Working knowledge of tools for server/application metrics, logging, and monitoring (Monit, ELK, Graylog) is an added plus
- Working knowledge of DevOps configuration management utilities like Ansible, Salt, Puppet is an added plus
- Working Knowledge of DevOps containerization technologies like Docker, LXD is an added plus
- Working Knowledge of container orchestration platform like Kubernetes is an added plus
- Understand fundamental design principles and best practices for developing backend servers and web applications
- Gather requirements, scope functionality, estimate, and translate those requirements into solutions.
- Implement and integrate software features as per requirements.
- Deliver across the entire app life cycle.
- Work on a product creation project and/or technology project with implementation or integration responsibilities; improve an existing code base if required, with the ability to read source code to understand data flow and origin
- Design effective data storage for the task at hand and know how to optimize query performance along the way.
- Follow an agile methodology of development and delivery
- Strictly adhere to coding standards and internal practices; must be able to conduct code reviews
- Mentor and possibly lead junior developers
- Contribute towards innovation and performance optimization of apps
- Explain technologies and solutions to technical and non-technical stakeholders
- Diagnose bugs and other issues in products
- Continuously discover, evaluate, and implement new technologies to maximize development efficiency
Must have / Good to have:
- 4+ years of experience with core Python development
- Design and implementation of highly available, performant applications in a Unix environment
- Good with multithreading and data structures
- Develop back-end components to improve responsiveness and overall performance
- Familiarity with database design, integration with applications, and Python packaging. Familiarity with front-end technologies (like JavaScript and HTML5), REST APIs, and security considerations
- Familiarity with functional testing and deployment automation frameworks
- Experience developing 3-4 production-ready applications using Python as the programming language
- Experience in writing unit test cases, including positive and negative test cases (a sketch follows this list)
- Experience with CI/CD pipeline code deployment (Git, SVN, Jenkins or TeamCity)
- Experience with Agile and DevOps methodology
- Very good problem-solving skills
- Experience with Web technologies is a plus
- Experience with ELK stack is a plus.
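For the unit-testing item above, here is a minimal sketch of the positive/negative test style using the standard library's unittest; the function under test is hypothetical.

```python
# Minimal sketch: positive and negative unit test cases for a small helper.
import unittest


def parse_port(value: str) -> int:
    """Parse a TCP port number, rejecting anything outside 1-65535."""
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port


class TestParsePort(unittest.TestCase):
    def test_valid_port(self):          # positive case
        self.assertEqual(parse_port("8080"), 8080)

    def test_out_of_range(self):        # negative case: bad value
        with self.assertRaises(ValueError):
            parse_port("70000")

    def test_not_a_number(self):        # negative case: bad format
        with self.assertRaises(ValueError):
            parse_port("abc")


if __name__ == "__main__":
    unittest.main()
```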
Senior Software Engineer (Java)
We are looking for a Senior Software Engineer - Java
If you're a creative problem solver who is eager to develop amazing products and hungry for a new adventure, a world-class workplace is waiting for you in the heart of Kolkata.
An exhaustive list of expectations:
✓ Design and implement cutting-edge applications
✓ Participate in code reviews and application debugging and diagnosis.
✓ Practice modern software development methodologies such as continuous delivery, and scrum.
✓ Collaborate with product managers and engineers to build scalable systems enabling innovative ordering experiences.
Requirements -
✓ Knowledge and 5+ years of experience in developing Java applications
✓ A completed technical degree in Computer Science or any related fields.
✓ Profound knowledge and working experience with Java frameworks (Spring Boot) and SQL databases.
✓ Solid experience in the design and implementation of Restful APIs and design patterns.
✓ Strong knowledge of Core Java, REST, Spring Framework, Spring Boot microservices, JPA (e.g. Hibernate, OpenJPA, etc.), Docker, Jenkins, ELK Stack
✓ Experience working with NoSQL Technologies and interest in Elasticsearch, and Microservices architectures.
✓ Curiosity, out of box problem-solving abilities, and an eye for detail.
✓ Passion for clean code
What really makes us interested in you - You are self-motivated. You think like an entrepreneur, constantly innovating and driving positive change, but more importantly, you consistently deliver stupendous results.
Number of positions – 3
Job Location – Kolkata, Sector 5
DESIRED SKILLS AND EXPERIENCE
Strong analytical and problem-solving skills
Ability to work independently, learn quickly and be proactive
3-5 years overall and at least 1-2 years of hands-on experience in designing and managing DevOps Cloud infrastructure
Experience must include a combination of:
o Experience working with configuration management tools – Ansible, Chef, Puppet, SaltStack (expertise in at least one tool is a must)
o Ability to write and maintain code in at least one scripting language (Python preferred)
o Practical knowledge of shell scripting
o Cloud knowledge – AWS, VMware vSphere
o Good understanding of and familiarity with Linux
o Networking knowledge – Firewalls, VPNs, Load Balancers
o Web/Application servers, Nginx, JVM environments
o Virtualization and containers - Xen, KVM, Qemu, Docker, Kubernetes, etc.
o Familiarity with logging systems - Logstash, Elasticsearch, Kibana
o Git, Jenkins, Jira
Roles and Responsibilities:
● Perform detailed feature requirements analysis along with a team of senior developers, define system functionality, work on system design, and document the same
● Design/develop/improve Cogno AI’s backend infrastructure and stack, and build fault-tolerant, scalable, real-time distributed systems
● Own the design, development, and deployment of code to improve product and platform functionality
● Take initiative and contribute ideas for improving the technology team’s processes, leading to better team performance and more robust solutions
● Writing high-performance, reliable and maintainable code
● Support team with timely analysis and debugging of operational issues
● Emphasis on automation and scripting
● Cross-functional communication to deliver projects
● Mentor junior team members technically and manage a team of software engineers
● Taking interviews and making tests for hiring people in the technology team
What do we look for?
The following are the important eligibility requirements for this Job:
● Bachelor's or Master's degree in computer science or equivalent.
● 5+ years of experience working as a software engineer, preferably in a product-based company.
● Experience working with major cloud solutions AWS (preferred), Azure, and GCP.
● Familiarity with 3-Tier, microservices architecture and distributed systems
● Experience with the design and development of RESTful services (a sketch follows this list)
● Experience with developing Linux-based applications, networking and scripting.
● Experience with different data stores, data modelling and scaling them
● Familiarity with data stores such as PostgreSQL, MySQL, MongoDB, etc.
● 4+ years of experience with web frameworks (preferably Django, Flask etc.)
● Good understanding of data structures, multi-threading and concurrency concepts.
● Experience with DevOps tools like Jenkins, Ansible, Kubernetes, and Git is a plus.
● Familiarity with Elasticsearch queries and visualization tools like Grafana and Kibana
● Strong networking fundamentals: Firewalls, Proxies, DNS, Load Balancing, etc.
● Strong analytical and problem-solving skills.
● Excellent written and verbal communication skills.
● Team player, flexible and able to work in a fast-paced environment.
● End-to-end ownership of the product. You own what you develop.
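For the RESTful-services item above, here is a minimal sketch of a REST endpoint in Flask (one of the frameworks this role prefers); the resource model, routes, and in-memory store are hypothetical.

```python
# Minimal sketch: a tiny RESTful service in Flask.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# In-memory store standing in for PostgreSQL/MySQL/MongoDB
TICKETS = {1: {"id": 1, "subject": "Demo ticket", "status": "open"}}


@app.get("/api/tickets/<int:ticket_id>")
def get_ticket(ticket_id):
    ticket = TICKETS.get(ticket_id)
    if ticket is None:
        abort(404)  # resource not found
    return jsonify(ticket)


@app.post("/api/tickets")
def create_ticket():
    payload = request.get_json(force=True)
    new_id = max(TICKETS) + 1
    TICKETS[new_id] = {"id": new_id,
                       "subject": payload.get("subject", ""),
                       "status": "open"}
    return jsonify(TICKETS[new_id]), 201  # created


if __name__ == "__main__":
    app.run(debug=True)
```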
POSITION SUMMARY:
We are looking for a passionate, high-energy individual to help build and manage the infrastructure network that powers the Product Development Labs for F5 Inc. The F5 Infra Engineer plays a critical role in our Product Development team by providing valuable services and tools for the F5 Hyderabad Product Development Lab. The Infra team supports both production systems and customized/flexible testing environments used by Test and Product Development teams. As an Infra Engineer, you'll have the opportunity to work with cutting-edge technology and talented individuals. The ideal candidate will have experience in private and public cloud (AWS, Azure, GCP, OpenStack), storage, backup, VMware, KVM, Xen, Hyper-V hypervisor server administration, networking, and automation in a data centre operations environment at global enterprise scale, with Kubernetes and OpenShift container platforms.
EXPERIENCE
7-9+ years – Software Engineer III
PRIMARY RESPONSIBILITIES:
- Drive the design, project build, infrastructure setup, monitoring, measurement, and improvement of the quality of services provided, including network and virtual instance services from OpenStack, VMware VIO, public and private cloud, and DevOps environments.
- Work closely with customers, understand their requirements, and deliver them on the agreed timelines.
- Work closely with F5 architects and vendors to understand emerging technologies and the F5 product roadmap, and how they would benefit the Infra team and its users.
- Work closely with the team and complete the deliverables on time.
- Consult with testers, application, and service owners to design scalable, supportable network infrastructure that meets usage requirements.
- Assume ownership of large/complex systems projects; mentor Lab Network Engineers in best practices for ongoing maintenance and scaling of large/complex systems.
- Drive automation efforts for the configuration and maintainability of the public/private cloud.
- Lead product selection for replacement or new technologies.
- Address user tickets in a timely manner for the covered services.
- Deploy, manage, and support production and pre-production environments for our core systems and services.
- Migration and consolidation of infrastructure.
- Design and implement major service and infrastructure components.
- Research, investigate, and define new areas of technology to enhance existing services or new service directions.
- Evaluate the performance of services and infrastructure; tune and re-evaluate the design and implementation of current source code and system configuration.
- Create and maintain scripts and tools to automate the configuration, usability, and troubleshooting of the supported applications and services (a sketch follows this list).
- Take ownership of activities and new initiatives.
- Provide global infra support from India for product development teams.
- Provide on-call support on a rotational basis across global time zones.
- Vendor management for the latest hardware and software evaluations to keep systems up to date.
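For the automation-scripts responsibility above, here is a minimal Python sketch of a lab health check that reports local disk headroom and the reachability of a few services; the hosts, ports, and thresholds are hypothetical, and real tooling would feed results into the team's monitoring or ticketing systems.

```python
# Minimal sketch: lab health check for disk headroom and service reachability.
import shutil
import socket

HOSTS = {"lab-openstack-api": ("10.0.0.10", 5000),   # hypothetical endpoints
         "lab-vcenter": ("10.0.0.20", 443)}


def disk_ok(path: str = "/", min_free_ratio: float = 0.15) -> bool:
    """True if the filesystem at `path` still has the required free ratio."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total >= min_free_ratio


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    print(f"local disk OK: {disk_ok()}")
    for name, (host, port) in HOSTS.items():
        status = "UP" if port_open(host, port) else "DOWN"
        print(f"{name} ({host}:{port}): {status}")
```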
KNOWLEDGE, SKILLS AND ABILITIES:
- In-depth, multi-disciplined knowledge of storage, compute, network, and DevOps technologies, including the latest cutting-edge technologies.
- Multi-cloud: AWS, Azure, GCP, OpenStack, DevOps operations.
- IaaS: Infrastructure as a Service, Metal as a Service, platform services.
- Storage: Dell EMC, NetApp, Hitachi, Qumulo, and other storage technologies.
- Hypervisors: VMware, Hyper-V, KVM, Xen, and AHV.
- DevOps: Kubernetes, OpenShift, Docker, and other container and orchestration platforms.
- Automation: scripting experience in Python/shell/Golang, full-stack development, and application deployment.
- Tools: Jenkins, Splunk, Kibana, Terraform, Bitbucket, Git, CI/CD configuration.
- Data centre operations: racking, stacking, cable matrix, solution design, and solutions architecture.
- Networking skills: Cisco/Arista switches and routers; experience with cable matrix design and pathing (fibre/copper).
- Experience in SAN/NAS storage (EMC/Qumulo/NetApp and others).
- Experience with Red Hat Ceph storage.
- Working knowledge of Linux, Windows, and hypervisor operating systems and virtual machine technologies.
- SME (subject matter expert) for cutting-edge technologies.
- Data centre architect professional and storage expert-level certified professional experience.
- A solid understanding of high-availability systems, redundant networking, and multipathing solutions.
- Proven problem resolution related to network infrastructure; judgment, negotiating, and decision-making skills along with excellent written and oral communication skills.
- Working experience in object, block, and file storage technologies.
- Experience in backup technologies and backup administration.
- Dell/HP/Cisco UCS server administration is an additional advantage.
- Ability to quickly learn and adopt new technologies.
- Strong experience with and exposure to open-source platforms.
- Working experience with monitoring tools such as Zabbix, Nagios, Datadog, etc.
- Working experience with bare-metal services and OS administration.
- Working experience with clouds such as AWS (IPsec), Azure (ExpressRoute), GCP (VPN tunnels), etc.
- Working experience with software-defined networking (VMware NSX, SDN, Open vSwitch, etc.).
- Working experience with systems engineering and Linux/Unix administration.
- Working experience with database administration: PostgreSQL, MySQL, NoSQL.
- Working experience with automation/configuration management using Puppet, Chef, or an equivalent.
- Working experience with DevOps operations: Kubernetes, containers, Docker, and Git repositories.
- Experience with build system processes, code inspection, and delivery methodologies.
- Knowledge of creating operational dashboards and execution lanes.
- Experience and knowledge of DNS, DHCP, LDAP, AD, domain-controller services, and PXE services.
- SRE experience: responsible for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning.
- Vendor support: OEM upgrades, coordinating technical support, and troubleshooting experience.
- Experience in handling on-call support and the escalation hierarchy process.
- Knowledge of scale-out and scale-in architecture.
- Working experience with ITSM/process management tools like ServiceNow, Jira, Jira Align.
- Knowledge of Agile and Scrum principles.
- Working experience with ServiceNow.
- Knowledge sharing, transition experience, and self-learning behaviours.
at Dataweave Pvt Ltd
Period: 6 months+
JD:
● 4-7 years of experience building and scaling APIs and web applications.
● Experience building and managing large scale data/analytics systems.
● Have a strong grasp of CS fundamentals and excellent problem-solving abilities, and a good understanding of software design principles and architectural best practices.
● Be passionate about writing code and have experience coding in multiple languages, including at least one scripting language, preferably Python.
● Be able to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
● Be a self-starter who thrives in fast-paced environments with minimal ‘management’.
● Have experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra, Elastic.
● Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
● Use the command line like a pro. Be proficient in Git and other essential software development tools.
● Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
● Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, Datadog, etc.
● Working knowledge of building websites and apps. Good understanding of integration complexities and dependencies.
● Working knowledge of Linux server administration as well as the AWS ecosystem is desirable.
● It's a huge bonus if you have some personal projects (including open-source contributions) that you work on during your spare time. Show off some of your projects hosted on GitHub.
Intuitive Cloud (www.intuitive.cloud) is one of the fastest-growing top-tier cloud solutions and SDx engineering solution and service companies, supporting 80+ global enterprise customers across the Americas, Europe, and the Middle East.
Intuitive is a recognized professional and managed services partner with core superpowers in cloud (public/hybrid), security, GRC, DevSecOps, SRE, application modernization/containers/K8s-as-a-service, and cloud application delivery.
Experience managing, supporting and deploying network infrastructures
Strong ability to diagnose server or network alerts, events or issues
Understanding of common information architecture framework
Perform proactive monitoring and alerting using a ticketing platform and reporting
Ability to perform software upgrades, server and 3rd party application patches, etc.
Perform health-check and proactive maintenance of all the infrastructure devices
Good time management and organizational skills
Ability to handle multiple concurrent tasks and projects with minimal supervision
Knowledge of project management methodologies and techniques
Good verbal as well as written communication skills
Ability to work a flexible schedule with rotational 12-hour shifts (as per the roster)
Previous customer service or helpdesk experience preferred
We are looking for a Director of Engineering to lead one of our key product engineering teams. This role will report directly to the VP of Engineering and will be responsible for successful execution of the company's business mission through development of cutting-edge software products and solutions.
- As an owner of the product you will be required to plan and execute the product road map and provide technical leadership to the engineering team.
- You will have to collaborate with Product Management and Implementation teams and build a commercially successful product.
- You will be responsible to recruit & lead a team of highly skilled software engineers and provide strong hands on engineering leadership.
- Deep technical knowledge of software product engineering using Java/J2EE, Node.js, React.js, full stack, NoSQL DBs (MongoDB, Cassandra, Neo4j), Elasticsearch, Kibana, ELK, Kafka, Redis, Docker, Kubernetes, Apache, Solr, ActiveMQ, RabbitMQ, Spark, Scala, Sqoop, HBase, Hive, WebSockets, web crawlers, Spring Boot, etc. is a must
Requirements
16+ years of experience in Software Engineering with at least 5+ years as an engineering leader in a software product company.
- Hands-on technical leadership with proven ability to recruit high performance talent
- High technical credibility - ability to audit technical decisions and push for the best solution to a problem.
- Experience building E2E applications right from the backend database to the persistence layer.
- Experience with UI technologies (Angular, React.js, Node.js) or a full-stack environment is preferred.
- Experience with NoSQL technologies (MongoDB, Cassandra, Neo4j, DynamoDB, etc.)
- Elasticsearch, Kibana, ELK, Logstash.
- Experience in developing Enterprise Software using Agile Methodology.
- Good understanding of Kafka, Redis, ActiveMQ, RabbitMQ, Solr etc.
- SaaS cloud-based platform exposure.
- Experience on Docker, Kubernetes etc.
- Ownership of E2E design and development, and exposure to delivering quality enterprise products/applications
- A track record of setting and achieving high standards
- Strong understanding of modern technology architecture
- Key Programming Skills: Java, J2EE with cutting edge technologies
- Excellent team building, mentoring and coaching skills are a must-have
Benefits
Five Reasons Why You Should Join Zycus
- Cloud Product Company: We are a Cloud SaaS Company and our products are created by using the latest technologies like ML and AI. Our UI is in Angular JS and we are developing our mobile apps using React.
- A Market Leader: Zycus is recognized by Gartner (world’s leading market research analyst) as a Leader in Procurement Software Suites.
- Move between Roles: We believe that change leads to growth and therefore we allow our employees to shift careers and move to different roles and functions within the organization
- Get a Global Exposure: You get to work and deal with our global customers.
- Create an Impact: Zycus gives you the environment to create an impact on the product and transform your ideas into reality. Even our junior engineers get the opportunity to work on different product features.
About Us
Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects; offers data-driven actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C type user-experience to the end-users.
Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization.
Start your #CognitiveProcurement journey with us, as you are #MeantforMore.
Click here to apply:
Director of Engineering - Zycus (Mumbai): https://apply.workable.com/zycus-1/j/D926111745/
Director of Engineering - Zycus (Bengaluru): https://apply.workable.com/zycus-1/j/90665BFD4C/
Director of Engineering - Zycus (Pune): https://apply.workable.com/zycus-1/j/3A5FBA2C7C/
• Responsibilities:
o Should be able to work with APIs, shards, etc. in Elasticsearch (a sketch follows this list).
o Write parsers in Logstash.
o Create dashboards in Kibana.
• Mandatory Experience.
o Must have a very good understanding of log analytics.
o Hands-on experience in Elasticsearch, Logstash & Kibana should be at an expert level.
o Elasticsearch: Should be able to write Kibana API calls.
o Logstash: Should be able to write parsers.
o Kibana: Create different visualizations and dashboards according to client needs.
o Scripts: Should be able to write scripts in Linux.
Should design and operate data pipelines.
Build and manage analytics platforms using Elasticsearch, Redshift, MongoDB.
Strong programming fundamentals in data structures and algorithms.
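For the first responsibility above (working with the Elasticsearch API and shards), here is a minimal Python sketch that pulls cluster health and flags any shard that is not in the STARTED state; the cluster URL is assumed and authentication is omitted for brevity.

```python
# Minimal sketch: cluster health and per-shard state via the Elasticsearch REST API.
import requests

ES = "http://localhost:9200"  # assumed self-hosted cluster

# Cluster-level view: overall status plus active/unassigned shard counts
health = requests.get(f"{ES}/_cluster/health").json()
print("cluster status:", health["status"],
      "| active shards:", health["active_shards"],
      "| unassigned:", health["unassigned_shards"])

# Per-shard view via the _cat API (format=json gives machine-readable output)
shards = requests.get(f"{ES}/_cat/shards", params={"format": "json"}).json()
for shard in shards:
    if shard["state"] != "STARTED":  # surface anything not healthy
        print(shard["index"], shard["shard"], shard["prirep"], shard["state"])
```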
Technical Support Engineer, Velocity
Responsibilities
- Interact with Velocity customers via phone, email, or video conference and, demonstrating the highest level of urgency, resolve customer issues in a timely manner.
- Build and maintain excellent relationships with clients and achieve the highest level of customer satisfaction.
- Work as part of our extended support team in a startup atmosphere, doing whatever it takes to exceed customer expectations.
- Be an owner and respond to internal inquiries related to Velocity systems, products, and processes; interact with multiple third-party vendors to resolve issues.
- Understand the technical application of the Velocity suite of products and services.
- Partner with Customer Success, Sales, Engineering, and Product teams to resolve sophisticated problems with potentially costly and far-reaching consequences.
- Write it down: identify and create documentation that empowers others.
What do you need?
- 3+ years of relevant experience in Customer Success.
- An understanding of web applications, REST APIs, and DB systems.
- Experience with JavaScript, Ruby, Perl, or Python programming.
- Prior knowledge of DBMS and SQL.
- Past experience using tools like Postman and Kibana.
- Strong analytical and problem-solving skills.
- Strong debugging skills; an instinctive ability to subdivide problems into basic components in order to efficiently pinpoint the root cause of issues.
- Team player with solid communication and presentation skills.
- Good interpersonal communication and customer service skills are needed to work successfully with customers in high-stress and/or ambiguous situations.
Java Elasticsearch
- Java microservices ELK stack (Elasticsearch, Logstash, Kibana, Beats, X-Pack, APM),
- Build Custom Reports
Java + NodeJS
- Java with Node.js- Microservices Development,
- Using graphql with node.js,
- Backend development with Node.JS
Java Microservices
- Working experience in Java and Spring Boot (microservices)
- Strong understanding of OOPs concepts
- Proficient in API design and development using RESTful services
- Knowledge and understanding of design principles behind building a scalable application
- Source code management using git
Name - Savvology Games (http://www.savvologygames.com)
Location - WeWork, Spectrum Towers, Malad, Mumbai
About Savvology - There is a lot of luck involved in most of the mainstream real-money games (Poker, Fantasy, Ludo, Rummy) which makes it hard to win anything. We want to create pure skill-based strategy games (no luck factor), where you can control your outcome better. What’s more, more than 50% of the players win! We make use of Game Theory (the science of decision-making) to come up with games that are so simple that even a 5-year-old can play. Our vision is to encourage people to make better decisions through gamification.
Job Brief
We are looking for an experienced and self-driven Backend Developer to join our young team. You will be responsible for all the server-side logic of our games, ensuring a smooth, glitch-free gaming experience. You have to minimize downtime, crashes, and other bugs and capture & store data correctly for further analytics. You will be working alongside a dynamic team of IITian founders, engineers, designers, and marketers, so there is a lot of scope for learning and growth.
We’re a young, passionate team looking to make a dent in the gaming space. We work hard, but we party harder (beer/pizza every Friday night). If you love challenges, playing games, and building innovative products from scratch, we’d like to meet you.
Responsibilities
- Build robust backend systems that do not break down.
- Write clean and reusable code for future use and ease of understanding
- Collaborate with front-end developers to integrate user-facing elements with server-side logic
- Give input on how to develop new features in the most cost- and effort-efficient way
- Implementation of security standards, data protection, and privacy protocols
- We want to be a data-driven company. Gather data (events, clicks, behaviors) to facilitate analytics by the Product and Marketing teams
- Create dashboards where all key metrics and data are visible at a glance
- Implement and maintain an SEO-friendly website
Skills
- 3+ years of proven work experience as a Full-Stack / Back-end developer
- 2+ years of NodeJS experience
- 1+ years of ReactJS experience
- Proficient in MySQL and database design and schemas
- Familiarity with ELK stack. Hands-on project experience will be preferred.
- Website development. Should be well-versed in HTML / CSS / Javascript
- Ability to understand business requirements and translate them into technical requirements
Requirements
- Should be a team player and have a problem-solving attitude.
- Should show initiative and be opinionated. We don’t want a yes-man. [MUST]
- Computer Science or a relevant degree from a reputed university is a plus but NOT mandatory
- Brownie points if you’ve worked on / developed a mobile game before
What you can expect at Savvology Games
- A cool work atmosphere complete with flexible working hours, unlimited coffee, fruit water, games (Foosball, Table Tennis, Pool, Carrom, Board Games, Cards…). Time will fly.
- A hockey-stick trajectory for growth professionally
- Fun people who you’ll want to hang out with after office hours too :)
- Lots of treats - Did we mention beer/pizza every Friday?
European Bank headquartered at Copenhagen, Denmark.
Roles & Responsibilities
- Designing and delivering a best-in-class, highly scalable data governance platform
- Improving processes and applying best practices
- Contribute to all scrum ceremonies, assuming the role of ‘scrum master’ on a rotational basis
- Development, management and operation of our infrastructure to ensure it is easy to deploy, scalable, secure and fault-tolerant
- Flexible on working hours as per business needs
Our client is a call management solutions company, which helps small to mid-sized businesses use its virtual call center to manage customer calls and queries. It is an AI and cloud-based call operating facility that is affordable as well as feature-optimized. The advanced features offered like call recording, IVR, toll-free numbers, call tracking, etc are based on automation and enhances the call handling quality and process, for each client as per their requirements. They service over 6,000 business clients including large accounts like Flipkart and Uber.
- Being involved in Configuration Management, Web Services Architectures, DevOps Implementation, Build & Release Management, Database management, Backups, and Monitoring.
- Ensuring reliable operation of CI/ CD pipelines
- Orchestrate the provisioning, load balancing, configuration, monitoring and billing of resources in the cloud environment in a highly automated manner
- Logging, metrics and alerting management.
- Creating Docker files
- Creating Bash/ Python scripts for automation.
- Performing root cause analysis for production errors.
What you need to have:
- Proficient in the Linux command line and troubleshooting.
- Proficient in AWS services: deploying, monitoring, and troubleshooting applications in AWS.
- Hands-on experience with CI tooling, preferably with Jenkins.
- Proficient in deployment using Ansible.
- Knowledge of infrastructure management (Infrastructure as Code) tools such as Terraform, AWS CloudFormation, etc.
- Proficient in deployment of applications behind load balancers and proxy servers such as Nginx, Apache.
- Scripting languages: Bash, Python, Groovy.
- Experience with logging, monitoring, and alerting tools like ELK (Elasticsearch, Logstash, Kibana), Nagios; Graylog, Splunk, Prometheus, Grafana is a plus (a sketch follows this list).
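For the scripting and alerting items above, here is a minimal Python sketch (assumed, not the client's actual stack) that scans an application log for errors and posts a summary to a webhook; the log path, error pattern, and webhook URL are placeholders.

```python
# Minimal sketch: scan a log for errors and post an alert summary.
import collections
import re

import requests

LOG_PATH = "/var/log/app/application.log"      # hypothetical path
WEBHOOK = "https://alerts.example.com/notify"  # hypothetical endpoint
ERROR_RE = re.compile(r"ERROR\s+\[(?P<component>[\w.-]+)\]")

counts = collections.Counter()
with open(LOG_PATH, errors="ignore") as fh:
    for line in fh:
        match = ERROR_RE.search(line)
        if match:
            counts[match.group("component")] += 1

if counts:
    top = ", ".join(f"{name}: {n}" for name, n in counts.most_common(5))
    requests.post(WEBHOOK,
                  json={"text": f"Errors in last log scan -> {top}"},
                  timeout=10)
```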
Roles & Responsibilities
- Proven experience with deploying and tuning open-source components into enterprise-ready production tooling
- Experience with data centre (Metal as a Service – MAAS) and cloud deployment technologies (AWS or GCP Architect certificates required)
- Deep understanding of Linux, from kernel mechanisms through user-space management
- Experience with CI/CD (Continuous Integration and Deployment) system solutions (Jenkins)
- Use monitoring tools (local and on public cloud platforms) such as Nagios, Prometheus, Sensu, ELK, CloudWatch, Splunk, New Relic, etc. to trigger instant alerts, reports, and dashboards
- Work closely with the development and infrastructure teams to analyze and design solutions with four-nines (99.99%) uptime across globally distributed, clustered, production and non-production virtualized infrastructure
- Wide understanding of IP networking as well as data centre infrastructure
Skills
- Expert with software development tools and source code management: understanding and managing issues and code changes, and grouping them into deployment releases in a stable and measurable way to maximize production
- Must be an expert at developing and using Ansible roles and configuring deployment templates with Jinja2
- Solid understanding of data collection tools like Flume, Filebeat, Metricbeat, JMX Exporter agents
- Extensive experience operating and tuning the Kafka streaming data platform, specifically as a message queue for big data processing (a sketch follows this list)
- Strong understanding of, and must have experience with:
- the Apache Spark framework, specifically Spark Core and Spark Streaming
- orchestration platforms: Mesos and Kubernetes
- data storage platforms: Elastic Stack, Carbon, ClickHouse, Cassandra, Ceph, HDFS
- core presentation technologies: Kibana and Grafana
- Excellent scripting and programming skills (Bash, Python, Java, Go, Rust). Must have previous experience with Rust in order to support and improve in-house developed products
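For the Kafka item above, here is a minimal sketch of consuming from Kafka as a message queue for downstream big-data processing, using the kafka-python client (an assumption; the team may use a different client). Broker addresses and the topic name are placeholders.

```python
# Minimal sketch: consume JSON events from a Kafka topic for downstream processing.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "metrics-events",                          # hypothetical topic
    bootstrap_servers=["kafka-1:9092", "kafka-2:9092"],
    group_id="spark-feeder",                   # consumer group for scaling out
    auto_offset_reset="earliest",
    enable_auto_commit=True,
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Hand off to downstream processing (Spark job, Elasticsearch index, etc.)
    print(f"partition={message.partition} offset={message.offset} "
          f"event_type={event.get('type')}")
```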
Certification
Red Hat Certified Architect certificate or equivalent required. CCNA certificate required. 3-5 years of experience running open-source big data platforms.
AI-driven automation tool for designers
✔ Defining the overall architecture and choosing the best stack, components, and subsystems for the search & storage infrastructure
✔ Design & build user-friendly APIs for accessing the backend infrastructure
✔ Active mentoring on engineering best practices, reducing technical debt and designing scale-ready solutions
✔ Proactively identify architectural gaps and enhancements and recommend appropriate solutions
✔ Work closely with the product and customer teams to effectively drive innovative solutions and drive adoption of features
Requirements
✔ Experience level: 4+ years
✔ A bachelor's or master’s degree in Computer Science/Software Engineering
✔ Production experience with a scalable search engine or building storage infrastructure for scaled-out consumer companies
Blue Yonder (formerly JDA Software, Inc.) is the leading su
- ELK (Elasticsearch, Logstash, Kibana) administration and implementation.
- Experience implementing syslog-ng on Unix/Linux platforms.
- Experience in Information/Cyber Security