We are looking for Customer Success Specialists (B.Tech graduates) to assist our customers with product inquiries in a swift, proficient, and friendly manner. You will be instrumental in customer retention by addressing concerns and product issues.
Required Candidate Profile
Interact with customers by email, live chat, or calls
Experience/knowledge in HTML/CSS, app integrations, using API keys, workflows/automations, and browser extensions

Who we are
At CoinCROWD, we're building the next-gen wallet for real-world crypto utility. Our flagship product, CROWD Wallet, is secure, intuitive, gasless, and designed to bring digital currencies into everyday spending, from a coffee shop to cross-border payments.
We're redefining the wallet experience for everyday users, combining the best of Web3 + AI to create a secure, scalable, and delightful platform.
We're more than just a blockchain company; we're an AI-native, crypto-forward startup. We ship fast, think long, and believe in building agentic, self-healing infrastructure that can scale across geographies and blockchains. If that excites you, let's talk.
What You'll Be Doing:
As the DevOps Lead at CoinCROWD, you'll own our infrastructure end to end, designing, deploying, and scaling secure systems to support blockchain transactions, AI agents, and token operations across global users.
You will:
- Lead CI/CD, infra automation, observability, and multi-region deployments for CoinCROWD products.
- Manage cloud and container infrastructure using GCP, Docker, Kubernetes, Terraform.
- Deploy and maintain scalable, secure blockchain infrastructure using QuickNode, Alchemy, Web3Auth, and other Web3 APIs.
- Implement infrastructure-level AI agents or scripts for auto-scaling, failure prediction, anomaly detection, and alert management (using LangChain, LLMs, or tools like n8n).
- Ensure 99.99% uptime for wallet systems, APIs, and smart contract layers.
- Build and optimize observability across on-chain/off-chain systems using tools like Prometheus, Grafana, Sentry, Loki, and the ELK Stack.
- Create auto-healing, self-monitoring pipelines that reduce human ops time via Agentic AI workflows.
- Collaborate with engineering and security teams on smart contract deployment pipelines, token rollouts, and app store release automation.
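For context, the 99.99% uptime target above leaves very little room for error. A quick error-budget calculation (a generic sketch, not CoinCROWD's actual SLO tooling; the window sizes are illustrative) shows how little downtime "four nines" allows:

```python
# Rough error-budget math for a 99.99% ("four nines") uptime SLO.
# Window sizes and the SLO value are illustrative assumptions.

def downtime_budget_minutes(slo: float, days: int) -> float:
    """Allowed downtime, in minutes, over a window of `days` at the given SLO."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - slo)

if __name__ == "__main__":
    for days in (30, 365):
        budget = downtime_budget_minutes(0.9999, days)
        print(f"{days} days at 99.99%: {budget:.2f} minutes of downtime allowed")
```

At 99.99%, the budget is roughly 4.3 minutes per month and under an hour per year, which is why the role emphasizes auto-healing pipelines over manual intervention.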
Agentic Ops: What It Means
- Use GPT-based agents to auto-document infra changes or failure logs.
- Run LangChain agents that triage alerts, perform log analysis, or suggest infra optimizations.
- Build CI/CD workflows that self-update or auto-tune based on system usage.
- Integrate AI to detect abnormal wallet behaviors, fraud attempts, or suspicious traffic spikes.
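The alert-triage idea above can be sketched in a few lines. This is a hedged illustration, not CoinCROWD's actual pipeline: the keyword/threshold heuristic stands in for an LLM call (e.g. via LangChain), and the alert fields and thresholds are invented for the example.

```python
# Minimal alert-triage sketch. The severity heuristic below stands in for an
# LLM-based classifier; the Alert schema and thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # e.g. "prometheus"
    message: str       # raw alert text
    error_rate: float  # fraction of failed requests

def triage(alert: Alert) -> str:
    """Route an alert; an LLM agent would refine this with log context."""
    text = alert.message.lower()
    if "wallet" in text or alert.error_rate > 0.05:
        return "page"    # user-money path or severe errors: wake a human
    if alert.error_rate > 0.01:
        return "ticket"  # degraded but not urgent: file for business hours
    return "log"         # informational: auto-document and suppress

print(triage(Alert("prometheus", "RPC latency high", 0.002)))  # -> log
```

The point of the sketch is the routing split: only a small class of alerts should ever page a human, and the agent's job is to make that decision defensibly.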
What We're Looking For:
- 5 to 10 years of DevOps/SRE experience, with at least 2 to 3 years in Web3, fintech, or high-scale infra.
- Deep expertise with Docker, Kubernetes, Helm, and cloud providers (GCP preferred).
- Hands-on with Terraform, Ansible, GitHub Actions, Jenkins, or similar IaC and pipeline tools.
- Experience maintaining or scaling blockchain infra (EVM nodes, RPC endpoints, APIs).
- Understanding of smart contract CI/CD, token lifecycle (ICO, vesting, etc.), and wallet integrations.
- Familiarity with AI DevOps tools, or interest in building LLM-enhanced internal tooling.
- Strong grip on security best practices, key management, and secrets infrastructure (Vault, SOPS, AWS KMS).
Bonus Points :
- You've built or run infra for a token launch, DEX, or high-TPS crypto wallet.
- You've deployed or automated a blockchain node network at scale.
- You've used AI/LLMs to write ops scripts, manage logs, or analyze incidents.
- You've worked with systems handling real-money movement with tight uptime and security requirements.
Why Join CoinCROWD:
- Equity-first model: Build real value as we scale.
- Be the architect of infrastructure that supports millions of real-world crypto transactions.
- Build AI-powered ops that scale without a 24/7 pager culture.
- Work remotely with passionate people who ship fast and iterate faster.
- Be part of one of the most ambitious crossovers of AI + Web3 in 2025.
Please Apply - https://zrec.in/7EYKe?source=CareerSite
About Us
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture by focusing on a long-term DevOps roadmap: we identify the technical and cultural issues in successfully implementing DevOps practices and work with the respective teams to fix them and increase overall productivity. We also run training sessions for developers on the importance of DevOps.

Our services include DevOps, DevSecOps, FinOps, Cost Optimization, CI/CD, Observability, Cloud Security, Containerization, Cloud Migration, Site Reliability, Performance Optimization, SIEM and SecOps, Serverless Automation, Well-Architected Reviews, MLOps, and Governance, Risk & Compliance.

We assess technology architecture, security, governance, compliance, and DevOps maturity for technology companies, and help them optimize cloud costs, streamline their technology architecture, and set up processes that improve the availability and reliability of their websites and applications. We set up tools for monitoring, logging, and observability, and focus on bringing a DevOps culture to the organization to improve its efficiency and delivery.
Job Description
Job Title: Senior DevOps Engineer / SRE
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 4-6 years
Education: B.Tech/MCA
Notice Period: Immediate
About Us
At Infra360.io, we are a next-generation cloud consulting and services company committed to delivering comprehensive, 360-degree solutions for cloud, infrastructure, DevOps, and security. We partner with clients to transform and optimize their technology landscape, ensuring resilience, scalability, cost efficiency and innovation.
Our core services include Cloud Strategy, Site Reliability Engineering (SRE), DevOps, Cloud Security Posture Management (CSPM), and related Managed Services. We specialize in driving operational excellence across multi-cloud environments, helping businesses achieve their goals with agility and reliability.
We thrive on ownership, collaboration, problem-solving, and excellence, fostering an environment where innovation and continuous learning are at the forefront. Join us as we expand and redefine what’s possible in cloud technology and infrastructure.
Role Summary
We are seeking a Senior DevOps Engineer (SRE) to manage and optimize large-scale, mission-critical production systems. The ideal candidate will have a strong problem-solving mindset, extensive experience in troubleshooting, and expertise in scaling, automating, and enhancing system reliability. This role requires hands-on proficiency in tools like Kubernetes, Terraform, CI/CD, and cloud platforms (AWS, GCP, Azure), along with scripting skills in Python or Go. The candidate will drive observability and monitoring initiatives using tools like Prometheus, Grafana, and APM solutions (Datadog, New Relic, OpenTelemetry).
Strong communication, incident management skills, and a collaborative approach are essential. Experience in team leadership and multi-client engagement is a plus.
Ideal Candidate Profile
- Solid 4-6 years of experience as an SRE/DevOps engineer with a proven track record of handling large-scale production environments
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Strong hands-on experience managing large-scale production systems
- Strong production troubleshooting skills and composure in high-pressure situations
- Strong experience with databases (PostgreSQL, MongoDB, Elasticsearch, Kafka)
- Has made production systems more scalable, highly available, and fault-tolerant
- Hands-on experience with ELK or other logging and observability tools
- Hands-on experience with Prometheus, Grafana, and Alertmanager, and on-call processes such as PagerDuty
- Problem-solving mindset
- Strong skills in Kubernetes, Terraform, Helm, ArgoCD, AWS/GCP/Azure, etc.
- Good Python/Go scripting and automation skills
- Strong fundamentals in DNS, networking, and Linux
- Experience with APM tools such as New Relic, Datadog, and OpenTelemetry
- Good experience with incident response, incident management, and writing detailed RCAs
- Experience with application best practices for making apps more reliable and fault-tolerant
- Strong leadership skills and the ability to mentor team members and provide guidance on best practices
- Able to manage multiple clients and take ownership of client issues
- Experience with Git and coding best practices
Good to have
- Team-leading experience
- Handling multiple clients
- Requirements gathering from clients
- Good communication skills
Key Responsibilities
- Design and Development:
- Architect, design, and develop high-quality, scalable, and secure cloud-based software solutions.
- Collaborate with product and engineering teams to translate business requirements into technical specifications.
- Write clean, maintainable, and efficient code, following best practices and coding standards.
- Cloud Infrastructure:
- Develop and optimise cloud-native applications, leveraging cloud services like AWS, Azure, or Google Cloud Platform (GCP).
- Implement and manage CI/CD pipelines for automated deployment and testing.
- Ensure the security, reliability, and performance of cloud infrastructure.
- Technical Leadership:
- Mentor and guide junior engineers, providing technical leadership and fostering a collaborative team environment.
- Participate in code reviews, ensuring adherence to best practices and high-quality code delivery.
- Lead technical discussions and contribute to architectural decisions.
- Problem Solving and Troubleshooting:
- Identify, diagnose, and resolve complex software and infrastructure issues.
- Perform root cause analysis for production incidents and implement preventative measures.
- Continuous Improvement:
- Stay up-to-date with the latest industry trends, tools, and technologies in cloud computing and software engineering.
- Contribute to the continuous improvement of development processes, tools, and methodologies.
- Drive innovation by experimenting with new technologies and solutions to enhance the platform.
- Collaboration:
- Work closely with DevOps, QA, and other teams to ensure smooth integration and delivery of software releases.
- Communicate effectively with stakeholders, including technical and non-technical team members.
- Client Interaction & Management:
- Serve as a direct point of contact for multiple clients.
- Handle the unique technical needs and challenges of two or more clients concurrently.
- Involves both direct interaction with clients and internal team coordination.
- Production Systems Management:
- Must have extensive experience in managing, monitoring, and debugging production environments.
- Will work on troubleshooting complex issues and ensure that production systems are running smoothly with minimal downtime.

- Responsible for designing, storing, processing, and maintaining large-scale data and related infrastructure.
- Can drive multiple projects both from operational and technical standpoint.
- Ideate and build PoVs or PoCs for new products that can help drive more business.
- Responsible for defining, designing, and implementing data engineering best practices, strategies, and solutions.
- Is an Architect who can guide the customers, team, and overall organization on tools, technologies, and best practices around data engineering.
- Lead architecture discussions, align with business needs, security, and best practices.
- Has a strong conceptual understanding of data warehousing and ETL, data governance and security, cloud computing, and batch and real-time data processing
- Has strong execution knowledge of Data Modeling, Databases in general (SQL and NoSQL), software development lifecycle and practices, unit testing, functional programming, etc.
- Understanding of Medallion architecture pattern
- Has worked on at least one cloud platform.
- Has worked as a data architect and executed multiple end-to-end data engineering projects.
- Has extensive knowledge of different data architecture designs and data modelling concepts.
- Manages conversation with the client stakeholders to understand the requirement and translate it into technical outcomes.
Required Tech Stack
- Strong proficiency in SQL
- Experience working on any of the three major cloud platforms i.e., AWS/Azure/GCP
- Working knowledge of ETL and/or orchestration tools like IICS, Talend, Matillion, Airflow, Azure Data Factory, AWS Glue, GCP Composer, etc.
- Working knowledge of one or more OLTP databases (Postgres, MySQL, SQL Server, etc.)
- Working knowledge of one or more data warehouses like Snowflake, Redshift, Azure Synapse, Hive, BigQuery, etc.
- Proficient in at least one programming language used in data engineering, such as Python (or Scala/Rust/Java)
- Has strong execution knowledge of Data Modeling (star schema, snowflake schema, fact vs dimension tables)
- Proficient in Spark and related applications like Databricks, GCP DataProc, AWS Glue, EMR, etc.
- Has worked on Kafka and real-time streaming.
- Has strong execution knowledge of data architecture design patterns (lambda vs kappa architecture, data harmonization, customer data platforms, etc.)
- Has worked on code and SQL query optimization.
- Strong knowledge of version control systems like Git to manage source code repositories and designing CI/CD pipelines for continuous delivery.
- Has worked on data and networking security (RBAC, secret management, key vaults, vnets, subnets, certificates)
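The star-schema data modeling called out above (fact vs. dimension tables) can be sketched with an in-memory database. This is an illustration only; the table and column names are invented for the example, not taken from any specific project:

```python
# Tiny star-schema sketch: one fact table joined to two dimension tables.
# Table/column names are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE dim_date     (date_id INTEGER PRIMARY KEY, month TEXT);
    CREATE TABLE fact_sales   (customer_id INTEGER, date_id INTEGER, amount REAL);

    INSERT INTO dim_customer VALUES (1, 'APAC'), (2, 'EMEA');
    INSERT INTO dim_date     VALUES (10, '2024-01'), (11, '2024-02');
    INSERT INTO fact_sales   VALUES (1, 10, 100.0), (1, 11, 50.0), (2, 10, 75.0);
""")

# Classic star-schema query: aggregate the fact table, sliced by
# attributes pulled in from the dimension tables.
rows = cur.execute("""
    SELECT c.region, d.month, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer c ON c.customer_id = f.customer_id
    JOIN dim_date d     ON d.date_id     = f.date_id
    GROUP BY c.region, d.month
    ORDER BY c.region, d.month
""").fetchall()
print(rows)  # [('APAC', '2024-01', 100.0), ('APAC', '2024-02', 50.0), ('EMEA', '2024-01', 75.0)]
```

The design point: facts hold additive measures keyed by surrogate IDs, while dimensions hold the descriptive attributes you group and filter by.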


Job Responsibilities
- .NET Framework
- C#
- SQL Server
- HTML
- CSS
- jQuery
- JavaScript
- Web API integration
- Mobile Application experience will be an added advantage
Required Candidate Profile
- Candidate needs to have good communication skills
- Ready to learn new technologies
- Candidates need to have good analytical skills
- Problem-solving skills
- Effectively communicate with team members
Perks and Benefits
- Day Shift
- Saturdays WFH
- Provident Fund
- Health Insurance
- Employee engagement activities
Job Summary:
The Department of Computer Science at Ace Engineering College is seeking qualified candidates for a faculty position in Computer Science at the Assistant Professor level. The successful candidate will be expected to engage in high-quality teaching, research, and service activities within the department and contribute to the academic community.
Responsibilities:
- As a faculty member of ACE, you will be responsible for the success of the students in your cohort. Therefore, you will be expected to understand the strengths and weaknesses of your cohort and adjust the pace of teaching accordingly.
- Timely delivery of lectures and conducting tutorials/training sessions per the program schedule
- Take classes according to the schedule and be involved in all academic operations (labs, exams, question paper creation).
- Provide mentorship and guidance to students, fostering a positive and inclusive learning environment.
- Contribute to curriculum development and enhancement of educational programs.
- Serve on departmental and university committees.
- Contribute to the academic and professional community through participation in conferences, workshops, and seminars.
- Provide academic advising and support to students.
- Engage in outreach activities to promote computer science education and awareness.
Qualification -
B.Tech - CSE, M.Tech - CSE (>60% mandatory in both)
- Strong logic and analytical skills.
- Experience with Node.js, JavaScript, HTML5, CSS3.
- Database experience - Postgres or other relational databases.
- Proficient understanding of code versioning tools, such as Git.
- Experience with performance debugging and benchmarking
- Integration of user-facing elements developed by front-end developers with server-side logic
- Must understand the project requirement and technical documents.
- Ability to complete the task within a specified time with full accuracy.
- Team player with organizational skills. Need to perform programming work in the mobile application as per requirement.



