
Senior SRE Developer
The Site Reliability Engineer (SRE) position is a software-development-oriented role, focusing heavily on coding, automation, and the stability and reliability of our global platform. The ideal candidate will primarily be a skilled software developer capable of participating in on-call rotations. As guardians of the production environment, the SRE team develops sophisticated telemetry and automation tools, proactively monitoring platform health, executing automated corrective actions, and anticipating and mitigating issues before they impact the platform.
Responsibilities:
● Develop and maintain advanced telemetry and automation tools for monitoring and managing global platform health.
● Actively participate in on-call rotations, swiftly diagnosing and resolving system issues and escalations from the customer support team (this is not a customer-facing role).
● Implement automated solutions for incident response, system optimization, and reliability improvement.
Requirements:
Software Development:
● 3+ years of professional Python development experience.
● Strong grasp of Python object-oriented programming concepts and inheritance.
● Experience developing multi-threaded Python applications.
● 2+ years of experience using Terraform, with proficiency in creating modules and submodules from scratch.
● Proficiency in, or willingness to learn, Golang.
Operating Systems:
● Experience with Linux operating systems.
● Strong understanding of monitoring critical system health parameters.
Cloud:
● 3+ years of hands-on experience with AWS services including EC2, Lambda, CloudWatch, EKS, ELB, RDS, DynamoDB, and SQS.
● AWS Associate-level certification or higher preferred.
Networking:
● Basic understanding of network protocols:
○ TCP/IP
○ DNS
○ HTTP
○ Load balancing concepts
Additional Qualifications (Preferred):
● Familiarity with trading systems and low-latency environments is advantageous but not required.
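The multi-threaded Python requirement above can be illustrated with a minimal sketch of a queue-based worker pattern for health checks; the metric names and thresholds are invented for illustration and stand in for real telemetry:

```python
import threading
import queue

def check_health(metric_name, value, threshold):
    """Return (metric_name, ok) based on a simple threshold rule."""
    return metric_name, value <= threshold

def worker(tasks, results):
    """Drain the task queue, pushing each check result onto the results queue."""
    while True:
        try:
            name, value, threshold = tasks.get_nowait()
        except queue.Empty:
            return
        results.put(check_health(name, value, threshold))

def run_checks(metrics, num_threads=4):
    """Fan simulated health checks out across worker threads."""
    tasks, results = queue.Queue(), queue.Queue()
    for item in metrics:
        tasks.put(item)
    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return dict(results.get() for _ in range(len(metrics)))

statuses = run_checks([("cpu_pct", 72.0, 90.0),
                       ("mem_pct", 95.5, 90.0),
                       ("disk_pct", 40.0, 80.0)])
# mem_pct exceeds its threshold, so it is flagged as unhealthy
```

In a real SRE tool the per-metric check would call out to an API or host agent; the thread-safe `queue.Queue` keeps the fan-out/fan-in coordination simple.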

About Wissen Technology
The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains.
With offices in the US, India, UK, Australia, Mexico, and Canada, we offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, and Quality Assurance & Test Automation.
Leveraging our multi-site operations in the USA and India and availability of world-class infrastructure, we offer a combination of on-site, off-site and offshore service models. Our technical competencies, proactive management approach, proven methodologies, committed support and the ability to quickly react to urgent needs make us a valued partner for any kind of Digital Enablement Services, Managed Services, or Business Services.
We believe that the technology and thought leadership that we command in the industry is the direct result of the kind of people we have been able to attract, to form this organization (you are one of them!).
Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like MIT, Wharton, IITs, IIMs, and BITS and with rich work experience in some of the biggest companies in the world.
Wissen Technology has been certified as a Great Place to Work®. Wissen is committed to providing its people the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.

About the Role
We are looking for a highly skilled Data Scientist with strong expertise in Machine Learning, MLOps, and Generative AI. The ideal candidate will have hands-on experience in building scalable ML models, deploying them in production, and working with modern AI frameworks, including GenAI technologies.
Key Responsibilities
· Design, develop, and deploy machine learning models for real-world business problems
· Work on end-to-end ML lifecycle: data preprocessing, model building, evaluation, deployment, and monitoring
· Implement and manage MLOps pipelines for scalable and reproducible workflows
· Utilize tools like MLflow for experiment tracking, model versioning, and lifecycle management
· Develop and integrate Generative AI (GenAI) solutions such as LLM-based applications
· Collaborate with cross-functional teams (engineering, product, business) to translate requirements into AI solutions
· Optimize model performance and ensure production stability
· Stay updated with the latest advancements in AI/ML and GenAI ecosystems
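As a sketch of the lifecycle steps above (preprocessing, model building, evaluation), here is a deliberately tiny pure-Python example; the data and the nearest-centroid model are invented for illustration and stand in for real tooling such as scikit-learn or MLflow:

```python
import statistics

# Toy dataset: (feature, label) pairs; the two classes are well separated
data = [(1.0, 0), (1.2, 0), (0.8, 0), (3.0, 1), (3.2, 1), (2.9, 1)]

# 1. Preprocessing: split into train and held-out sets
train, test = data[:4], data[4:]

# 2. Model building: a nearest-centroid classifier
def fit(rows):
    by_label = {}
    for x, y in rows:
        by_label.setdefault(y, []).append(x)
    return {y: statistics.mean(xs) for y, xs in by_label.items()}

def predict(centroids, x):
    return min(centroids, key=lambda y: abs(x - centroids[y]))

# 3. Evaluation: accuracy on the held-out set
model = fit(train)
accuracy = sum(predict(model, x) == y for x, y in test) / len(test)
```

The production version of each step (feature pipelines, experiment tracking, deployment, monitoring) is what the MLOps tooling in this role manages.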
Required Skills & Qualifications
· 4+ years of experience in Data Science / Machine Learning
· Strong programming skills in Python
· Hands-on experience with ML modeling techniques (supervised, unsupervised, NLP, etc.)
· Solid understanding of MLOps practices and tools
· Experience with MLflow or similar model lifecycle tools
· Practical experience in Generative AI (GenAI), including working with LLMs
· Experience with libraries/frameworks like Scikit-learn, TensorFlow, PyTorch
· Strong understanding of data structures, algorithms, and statistics
· Experience with cloud platforms (AWS/GCP/Azure) is a plus
Good to Have
· Experience with LLM fine-tuning, prompt engineering, or RAG pipelines
· Exposure to Docker, Kubernetes, and CI/CD pipelines
· Knowledge of data engineering workflows
*Spinify*
*JD - Business Development Associate*
*Job Summary*
We're seeking a motivated Business Development Associate to drive sales growth and revenue expansion. During the probation period, you'll undergo a comprehensive one-week training program, receive relevant data from management, and be expected to sell at least two subscription plans to customers.
*Key Responsibilities*
1. Sell subscription plans to clients
2. Generate leads and convert them into sales
3. Increase revenue for the company
4. Engage with potential clients through calls and meetings
5. Meet minimum sales targets consistently
6. Follow up on leads provided by management
*Compensation*
- Salary: ₹15,000 - ₹30,000 per month
- Incentives: Up to ₹60,000, depending on performance
- Earn up to ₹80,000–₹1 lakh depending on performance
*Probation Period Details*
- Duration: One week training
- Data Provision: We'll provide you with relevant customer data
- Expectations: Sell at least two subscription plans within the probation period
*Post-Probation Expectations*
- Payroll Commencement: Your salary will start once you successfully complete the probation period
- Ongoing Targets: You'll be expected to consistently meet sales targets and contribute to revenue growth
- Performance Evaluation: Your performance will be regularly evaluated to ensure you're meeting expectations
*Work Arrangement*
- Hybrid/Remote: Flexible work arrangement with options to work from home or office
If you're a driven and sales-focused individual, apply now to join our team!
Job Description
We are looking for an experienced engineer to join our data science team, who will help us design, develop, and deploy machine learning models in production. You will develop robust models, prepare their deployment into production in a controlled manner, while providing appropriate means to monitor their performance and stability after deployment.
What You’ll Do (including but not limited to):
- Preparing datasets needed to train and validate our machine learning models
- Anticipating and building solutions for problems that interrupt availability, performance, and stability in our systems, services, and products at scale
- Defining and implementing metrics to evaluate the performance of the models, both for computing performance (such as CPU & memory usage) and for ML performance (such as precision, recall, and F1)
- Supporting the deployment of machine learning models on our infrastructure, including containerization, instrumentation, and versioning
- Supporting the whole lifecycle of our machine learning models, including gathering data for retraining, A/B testing, and redeployments
- Developing, testing, and evaluating tools for machine learning model deployment, monitoring, and retraining
- Working closely within a distributed team to analyze and apply innovative solutions over billions of documents
- Supporting solutions ranging from rule-based systems and classical ML techniques to the latest deep learning systems
- Partnering with cross-functional team members to bring large scale data engineering solutions to production
- Communicating your approach and results to a wider audience through presentations
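The ML-performance metrics named above (precision, recall, F1) reduce to counts of true positives, false positives, and false negatives; a minimal sketch with invented labels:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for a binary classification task."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# 2 true positives, 1 false positive, 1 false negative
p, r, f1 = classification_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

In practice these would come from a library such as scikit-learn, but the definitions above are what the monitoring dashboards report.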
Your Qualifications:
- Demonstrated success with machine learning in a SaaS or Cloud environment, with hands-on knowledge of model creation and deployments in production at scale
- Good knowledge of traditional machine learning methods and neural networks
- Experience with practical machine learning modeling, especially on time-series forecasting, analysis, and causal inference.
- Experience with data mining algorithms and statistical modeling techniques for anomaly detection in time series such as clustering, classification, ARIMA, and decision trees is preferred.
- Ability to implement data import, cleansing and transformation functions at scale
- Fluency in Docker, Kubernetes
- Working knowledge of relational and dimensional data models with appropriate visualization techniques such as PCA.
- Solid English skills to effectively communicate with other team members
Due to the nature of the role, it would be nice if you have also:
- Experience with large datasets and distributed computing, especially with the Google Cloud Platform
- Fluency in at least one deep learning framework: PyTorch, TensorFlow / Keras
- Experience with NoSQL and graph databases
- Experience working in a Colab, Jupyter, or Python notebook environment
- Some experience with monitoring, analysis, and alerting tools like New Relic, Prometheus, and the ELK stack
- Knowledge of Java, Scala, or Go programming languages
- Familiarity with KubeFlow
- Experience with transformers, for example the Hugging Face libraries
- Experience with OpenCV
About Egnyte
In a content critical age, Egnyte fuels business growth by enabling content-rich business processes, while also providing organizations with visibility and control over their content assets. Egnyte’s cloud-native content services platform leverages the industry’s leading content intelligence engine to deliver a simple, secure, and vendor-neutral foundation for managing enterprise content across business applications and storage repositories. More than 16,000 customers trust Egnyte to enhance employee productivity, automate data management, and reduce file-sharing cost and complexity. Investors include Google Ventures, Kleiner Perkins, Caufield & Byers, and Goldman Sachs. For more information, visit www.egnyte.com
#LI-Remote
• Proven implementation experience of:
o C# and .NET Framework (version 4.5 or higher)
o Design Patterns and SOLID principles
o Transact-SQL and SQL Server (2012 or higher)
• Good knowledge with applied experience of:
o Unit and integration testing skills, including preparing test scripts and execution
o NoSQL databases (MongoDB)
• Competent knowledge with proven implementation experience of:
o Windows Presentation Foundation (WPF)
o Windows Communication Foundation (WCF)
• Excellent technical analysis and investigatory skills
• Ability to work with both business and IT staff in a pressured environment
Roles and Responsibilities:
- Must have at least 1 year of experience with Android Studio and Java/Kotlin.
- Translate designs and wireframes into high-quality code.
- Ensure the best possible performance, quality, and responsiveness of the application.
- Identify and correct bottlenecks and fix bugs.
- Help maintain code quality, organization, and automation.
- Strong knowledge of Android SDK, different versions of Android, and how to deal with different screen sizes
- Familiarity with RESTful APIs to connect Android applications to back-end services
- Strong knowledge of Android UI design principles, patterns, and best practices
- Experience with offline storage, threading, and performance tuning
- Ability to design applications around natural user interfaces, such as “touch”
- Knowledge of the open-source Android ecosystem and the libraries available for common tasks
- Ability to understand business requirements and translate them into technical requirements
- Familiarity with cloud message APIs and push notifications
- A knack for benchmarking and optimization
- Understanding of Google’s Android design principles and interface guidelines
- Proficient understanding of code versioning tools, such as Git
- Familiarity with continuous integration.
Required Skills:
- Designing and developing test automation scripts.
- Using test automation guidelines.
- Researching issues in software through testing.
- Collaborating with QA Analysts and Software Developers to develop solutions.
- Keeping updated with the latest industry developments.
We have an urgent opening for an IT Recruiter (work from home). Minimum 2+ years of experience required in IT and non-IT recruitment. Candidates without IT recruitment experience, please do not apply. The job involves end-to-end recruitment. ₹25,000 to ₹30,000 net salary. Immediate to 7-day joiners only. Personal desktop or laptop required.
Job Type: Full-time
Testsigma is built to make test automation effortless. We are a fast-growing, product-based startup backed by leading global Investors like Accel, STRIVE and Marquee Angels.
We are the first open-source test automation platform built out of India and competing with global giants. With product-market fit and a terrific team, we’re poised to capture the global market for AI driven, cloud-based test automation platforms.
The ideal candidate will have strong creative skills and a portfolio of work which demonstrates their passion for illustrative design and typography. This candidate will have experiences in working with numerous different design platforms such as digital and print forms.
Job Description:
As a Senior Account Executive, being part of an excellent sales team of A-Players, you will find an environment of immense learning & growth! Come & be a part of one of the fastest-growing global Dev Infra companies!
This will be a hands-on position in a typical start-up environment, so we are looking for a motivated self-starter who isn't afraid to roll up their sleeves and experiment quickly to achieve the goals.
What you will be doing:
- Consistently meet & exceed sales quota.
- Educate and nurture prospects, conduct a demo with each lead, share what Testsigma does, and employ a value-based solution selling methodology to drive these leads through a high-velocity pipeline.
- Execute all phases of the pipeline, and push deals through the sales cycle towards closure.
- Build a robust pipeline to meet quota consistently.
- Lead customers through an end-to-end sales cycle in collaboration with Pre-Sales and Support teams.
- Manage the entire sales lifecycle from customer engagement, POC and contract negotiation.
- Develop executive relationships to expand revenue potential.
- Articulate Testsigma's value proposition clearly and effectively to potential clients while understanding the competitive landscape.
- Demonstrate a sound understanding of how the overall business solution is positioned, deployed and supported.
- Work with all levels of GTM leadership to continuously improve processes like territory planning, lead/pipeline/opportunity management and KPIs.
- Maintain excellent data discipline in salesforce.com for your book of business
- Responsible for global geography for now.
Qualifications:
- 6 to 10 years of experience selling complex technology products.
- Minimum 4 years in the end-to-end sales process, including closing as an AE
- Experience in driving B2B sales growth, preferably in a technology/product environment (in SaaS industry preferred), selling to a technical audience.
- End-to-end sales experience managing complex sales cycles requiring stakeholder mapping, running technical proofs-of-concept in collaboration with a Solutions Engineer, price negotiations.
- Experience selling Enterprise level deals, selling to C-suite
- Consistent success with sales quota attainment
- Exceptional communication, problem-solving and collaboration skills
- Experience managing the sales cycle from business champion to the C-level
- Strong prospecting skills
Note: Candidates from a SaaS / technology product background preferred.
Territory : North America (Night shift)
Summary
Our Kafka developer has a combination of technical skills, communication skills, and business knowledge. The developer should be able to work on multiple medium-to-large projects. The successful candidate will have excellent technical skills in Apache/Confluent Kafka and enterprise data warehousing (preferably GCP BigQuery or an equivalent cloud EDW), and will be able to take oral and written business requirements and develop efficient code to meet set deliverables.
Must Have Skills
- Participate in the development, enhancement and maintenance of data applications both as an individual contributor and as a lead.
- Leading in the identification, isolation, resolution and communication of problems within the production environment.
- Lead developer applying technical skills in Apache/Confluent Kafka (preferred) or AWS Kinesis (optional), and a cloud enterprise data warehouse: Google BigQuery (preferred), AWS Redshift, or Snowflake (optional)
- Design and recommend the best approach for data movement from different sources to the cloud EDW using Apache/Confluent Kafka
- Performs independent functional and technical analysis for major projects supporting several corporate initiatives.
- Communicate and work with IT partners and the user community at various levels, from senior management to developers to business SMEs, for project definition.
- Works on multiple platforms and multiple projects concurrently.
- Performs code and unit testing for complex scope modules, and projects
- Provide expertise and hands-on experience with Kafka Connect using Schema Registry in a very high-volume environment (~900 million messages)
- Provide expertise in Kafka brokers, ZooKeeper, KSQL, KStream, and Kafka Control Center.
- Provide expertise and hands-on experience with AvroConverters, JsonConverters, and StringConverters.
- Provide expertise and hands-on experience with Kafka connectors such as MQ connectors, Elasticsearch connectors, JDBC connectors, FileStream connectors, and JMS source connectors, as well as Tasks, Workers, converters, and Transforms.
- Provide expertise and hands-on experience with custom connectors using Kafka core concepts and the API.
- Working knowledge of the Kafka REST Proxy.
- Ensure optimum performance, high availability and stability of solutions.
- Create topics, setup redundancy cluster, deploy monitoring tools, alerts and has good knowledge of best practices.
- Create stubs for producers, consumers, and consumer groups to help onboard applications from different languages/platforms.
- Leverage Hadoop ecosystem knowledge to design and develop capabilities to deliver solutions using Spark, Scala, Python, Hive, Kafka, and other tools in the Hadoop ecosystem.
- Use automation/provisioning tools such as Jenkins, uDeploy, or similar technologies
- Ability to perform data related benchmarking, performance analysis and tuning.
- Strong skills in In-memory applications, Database Design, Data Integration.
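The connector work described above typically means assembling a connector configuration and registering it with the Kafka Connect REST API. A minimal sketch building a JDBC source connector payload; the connector name, database URL, and table are hypothetical, and the property keys follow the Confluent JDBC source connector (exact keys can vary by connector version):

```python
import json

def jdbc_source_config(name, jdbc_url, table, topic_prefix, tasks=1):
    """Build a Kafka Connect JDBC source connector payload (Confluent-style keys)."""
    return {
        "name": name,
        "config": {
            "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
            "tasks.max": str(tasks),
            "connection.url": jdbc_url,
            "table.whitelist": table,
            "mode": "incrementing",
            "incrementing.column.name": "id",
            "topic.prefix": topic_prefix,
        },
    }

# This payload would normally be POSTed to the Connect REST API's /connectors endpoint
payload = jdbc_source_config(
    "orders-source",
    "jdbc:postgresql://db.example.com:5432/shop",  # hypothetical database
    "orders",
    "shop-",
)
body = json.dumps(payload)
```

Incrementing mode polls the table for rows whose `id` exceeds the last offset seen, which is what makes the source connector resumable across restarts.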











