
We are seeking an experienced Data Engineer with a strong background in Databricks, Python, Spark/PySpark and SQL to design, develop, and optimize large-scale data processing applications. The ideal candidate will build scalable, high-performance data engineering solutions and ensure seamless data flow across cloud and on-premise platforms.
Key Responsibilities:
- Design, develop, and maintain scalable data processing applications using Databricks, Python, and PySpark/Spark.
- Write and optimize complex SQL queries for data extraction, transformation, and analysis.
- Collaborate with data engineers, data scientists, and other stakeholders to understand business requirements and deliver high-quality solutions.
- Ensure data integrity, performance, and reliability across all data processing pipelines.
- Perform data analysis and implement data validation to ensure high data quality.
- Implement and manage CI/CD pipelines for automated testing, integration, and deployment.
- Contribute to continuous improvement of data engineering processes and tools.
Required Skills & Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Proven experience as a Databricks engineer with strong expertise in Python, SQL, and Spark/PySpark.
- Strong proficiency in SQL, including working with relational databases and writing optimized queries.
- Solid programming experience in Python, including data processing and automation.
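As a rough illustration of the SQL-plus-Python work described above, here is a minimal, self-contained sketch using the standard-library sqlite3 module (the orders table and its columns are hypothetical; the same windowed-aggregation pattern carries over to Spark SQL on Databricks):

```python
import sqlite3

# In-memory database with a hypothetical orders table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL, ts TEXT);
    INSERT INTO orders VALUES
        (1, 'acme',   100.0, '2024-01-01'),
        (2, 'acme',   250.0, '2024-01-02'),
        (3, 'globex',  75.0, '2024-01-01');
""")

# A windowed transformation: running total per customer, ordered by time.
rows = conn.execute("""
    SELECT customer, ts, amount,
           SUM(amount) OVER (PARTITION BY customer ORDER BY ts) AS running_total
    FROM orders
    ORDER BY customer, ts
""").fetchall()

for row in rows:
    print(row)
# ('acme', '2024-01-01', 100.0, 100.0)
# ('acme', '2024-01-02', 250.0, 350.0)
# ('globex', '2024-01-01', 75.0, 75.0)
```

The window function (`SUM ... OVER`) does in one pass what would otherwise need a self-join; the same query text runs largely unchanged via `spark.sql(...)` against a Spark DataFrame registered as a temp view.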

About Wissen Technology
The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains.
With offices in US, India, UK, Australia, Mexico, and Canada, we offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
Leveraging our multi-site operations in the USA and India and availability of world-class infrastructure, we offer a combination of on-site, off-site and offshore service models. Our technical competencies, proactive management approach, proven methodologies, committed support and the ability to quickly react to urgent needs make us a valued partner for any kind of Digital Enablement Services, Managed Services, or Business Services.
We believe that the technology and thought leadership that we command in the industry is the direct result of the kind of people we have been able to attract, to form this organization (you are one of them!).
Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like MIT, Wharton, IITs, IIMs, and BITS and with rich work experience in some of the biggest companies in the world.
Wissen Technology has been certified as a Great Place to Work®. The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.
Key Responsibilities:
Design, develop, and maintain efficient, reusable, and reliable Go code.
Implement and integrate with back-end services, databases, and APIs.
Write clean, scalable, and testable code following best practices and design patterns.
Collaborate with cross-functional teams to define, design, and ship new features.
Optimize application performance for maximum speed and scalability.
Identify and address bottlenecks and bugs, and devise solutions to these problems.
Stay up-to-date with the latest industry trends, technologies, and best practices.
Required Qualifications:
Proven experience as a Golang Developer or similar role in software development.
Proficiency in Go programming language, paradigms, constructs, and idioms.
Experience with server-side development, microservices architecture, and RESTful APIs.
Familiarity with common Go frameworks and tools such as Gin.
Knowledge of implementing monitoring, logging, and alerting systems.
Experience with SQL and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB).
Understanding of code versioning tools, such as Git.
Strong understanding of concurrency and parallelism in Go.
Experience with cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes) is a plus.
Excellent problem-solving skills and attention to detail.
Ability to work effectively both independently and as part of a team.
You can contact us directly: nine three one six one two zero one three two
Advanced SQL and data modeling skills (designing dimensional layers, 3NF models, denormalized views, and semantic layers), plus expertise in GCP services.
Role & Responsibilities:
● Design and implement robust semantic layers for data systems on Google Cloud Platform (GCP)
● Develop and maintain complex data models, including dimensional models, 3NF structures, and denormalized views
● Write and optimize advanced SQL queries for data extraction, transformation, and analysis
● Utilize GCP services to create scalable and efficient data architectures
● Collaborate with cross-functional teams to translate business requirements (specified in mapping sheets or legacy DataStage jobs) into effective data models
● Implement and maintain data warehouses and data lakes on GCP
● Design and optimize ETL/ELT processes for large-scale data integration
● Ensure data quality, consistency, and integrity across all data models and semantic layers
● Develop and maintain documentation for data models, semantic layers, and data flows
● Participate in code reviews and implement best practices for data modeling and database design
● Optimize database performance and query execution on GCP
● Provide technical guidance and mentorship to junior team members
● Stay updated with the latest trends and advancements in data modeling, GCP services, and big data technologies
● Collaborate with data scientists and analysts to enable efficient data access and analysis
● Implement data governance and security measures within the semantic layer and data model
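To make the dimensional-modeling responsibilities above concrete, here is a small, self-contained sketch of a star schema flattened into a denormalized view (table and column names are invented for illustration, and sqlite3 stands in for a warehouse; on GCP the same SQL pattern would run in BigQuery):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension and fact tables of a tiny star schema.
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE fact_sales  (sale_id INTEGER, product_id INTEGER, qty INTEGER, revenue REAL);

    INSERT INTO dim_product VALUES (1, 'widget', 'hardware'), (2, 'gizmo', 'hardware');
    INSERT INTO fact_sales  VALUES (10, 1, 3, 30.0), (11, 2, 1, 15.0), (12, 1, 2, 20.0);

    -- Denormalized view: facts joined to their dimension attributes,
    -- the kind of object a semantic layer exposes to analysts.
    CREATE VIEW v_sales_denorm AS
    SELECT s.sale_id, p.name, p.category, s.qty, s.revenue
    FROM fact_sales s JOIN dim_product p USING (product_id);
""")

# A semantic-layer style query against the view: no join knowledge needed.
rows = conn.execute(
    "SELECT name, SUM(revenue) FROM v_sales_denorm GROUP BY name ORDER BY name"
).fetchall()
print(rows)  # [('gizmo', 15.0), ('widget', 50.0)]
```

The design choice being illustrated: keep the normalized fact/dimension tables as the source of truth, and publish denormalized views on top so downstream consumers never repeat the join logic.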
Below is the high-level job description:
- New policy and use cases creation
- 24/7 administration and management of in-scope security devices; dedicated support for implementing changes (additions, modifications, and deletions) to security device configurations as per the organization's change management policy or process
- Participation in Change / Incident / Problem Management calls to comprehend and assess user-specific requirements and take appropriate action where needed
- Development of a comprehensive Plan of Action (POA), inclusive of a rollback strategy as a precautionary measure, for any change implementation, migration, or upgrade
- Precise execution of scheduled configuration changes within the specified timeframe as per the organization policy or process.
- Sanity checks should be performed thoroughly after any change, modification, upgrade, migration, or rollback, and documented along with appropriate artefacts.
- Periodic review of configuration, agent reconciliation, architecture, and infrastructure to ensure compliance, prevent interruptions, and identify automation and optimization opportunities.
- Conducting a thorough review of release notes for known and unknown issues and limitations pertaining to current and proposed releases.
- Offering insightful suggestions or resolutions where advisories align with the infrastructure.
- Post-upgrade, meticulously monitoring the Security Devices, validating application traffic, conducting essential testing, and documenting all artefacts generated
- Document open issues, limitations, and bugs of the new security solution and circulate them to all concerned teams before completing the handover.
- Execute routine device configuration and policy database updates adhering to InfoSec guidelines on a daily, weekly, and monthly basis as required.
- Prepare appropriate reports along with the POA and discuss them with concerned stakeholders to ensure timely implementation of automations or optimizations.
About GradRight
Our vision is to be the world’s leading Ed-Fin Tech company dedicated to making higher education accessible and affordable to all. Our mission is to drive transparency and accountability in the global higher education sector and create significant impact using the power of technology, data science and collaboration.
GradRight is the world’s first SaaS ecosystem that brings together students, universities and financial institutions in an integrated manner. It enables students to find and fund high return college education, universities to engage and select the best-fit students and banks to lend in an effective and efficient manner.
In the last three years, we have enabled students to get the best deals on over $2.8 billion of loan requests and facilitated disbursements of more than $350 million in loans. GradRight won the HSBC Fintech Innovation Challenge supported by the Ministry of Electronics & IT, Government of India, and was among the top 7 global finalists in The PIEoneer Awards, UK.
GradRight’s team possesses extensive domestic and international experience in the launch and scale-up of premier higher education institutions. It is led by alumni of IIT Delhi, BITS Pilani, IIT Roorkee, ISB Hyderabad and University of Pennsylvania. GradRight is a Delaware, USA registered company with a wholly owned subsidiary in India.
About the Role
We are looking for a passionate DevOps Engineer with hands-on experience in AWS cloud infrastructure, containerization, and orchestration. The ideal candidate will be responsible for building, automating, and maintaining scalable cloud solutions, ensuring smooth CI/CD pipelines, and supporting development and operations teams.
Core Responsibilities
Design, implement, and manage scalable, secure, and highly available infrastructure on AWS.
Build and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or GitHub Actions.
Containerize applications using Docker and manage deployments with Kubernetes (EKS, self-managed, or other distributions).
Monitor system performance, availability, and security using tools like CloudWatch, Prometheus, Grafana, ELK/EFK stack.
Collaborate with development teams to optimize application performance and deployment processes.
Required Skills & Experience
3–4 years of professional experience as a DevOps Engineer or similar role.
Strong expertise in AWS services (EC2, S3, RDS, Lambda, VPC, IAM, CloudWatch, EKS, etc.).
Hands-on experience with Docker and Kubernetes (EKS or self-hosted clusters).
Proficiency in CI/CD pipeline design and automation.
Experience with Infrastructure as Code (Terraform / AWS CloudFormation).
Solid understanding of Linux/Unix systems and shell scripting.
Knowledge of monitoring, logging, and alerting tools.
Familiarity with networking concepts (DNS, Load Balancing, Security Groups, Firewalls).
Basic programming/scripting experience in Python, Bash, or Go.
Nice to Have
Exposure to microservices architecture and service mesh (Istio/Linkerd).
Knowledge of serverless (AWS Lambda, API Gateway).
Job Title: Senior AI Engineer
Job Summary:
We are seeking experienced Senior AI Engineers to join our AI team and drive the design, development, and deployment of cutting-edge AI solutions. You will work on exciting projects, such as sentiment analysis for support tickets, automated data insights, conversational interfaces, and zero-touch planning using AI. This role requires close collaboration with cross-functional teams, including Product and Data Engineering, to deliver impactful AI-driven features that transform our platform.
Key Responsibilities:
- Design, develop, deploy, and maintain ML models and AI infrastructure.
- Collaborate with cross-functional teams to integrate ML models into production workflows.
- Utilize AWS services, including SageMaker and Bedrock, for model deployment and real-time monitoring.
- Implement and manage CI/CD pipelines to ensure efficient and reliable model deployment.
- Stay updated with the latest advancements in machine learning and AI best practices.
- Monitor and optimise model performance, addressing issues related to scalability and efficiency.
- Troubleshoot and resolve problems related to ML models and infrastructure.
Requirements:
- 5+ years of professional experience with Python programming.
- Hands-on experience with Machine Learning Operations (MLOps).
- Proven expertise in data engineering and ETL processes.
- Strong knowledge of AWS services, including SageMaker and Bedrock.
- Proficiency in setting up and managing Docker and CI/CD pipelines.
- Experience with large language models (LLMs) and prompt engineering.
- Familiarity with model performance monitoring and optimization techniques.
- Strong problem-solving skills and the ability to work in a fast-paced, collaborative environment.
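One small facet of the model-monitoring work listed above can be sketched in plain Python: a population stability index (PSI) check that flags drift between training-time and live score distributions. The sample data, bin count, and the 0.2 threshold mentioned in the docstring are illustrative conventions, not part of any specific platform's API:

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between two samples of model scores.

    PSI = sum over bins of (a - e) * ln(a / e), where e and a are the
    fractions of the expected (training) and actual (live) samples
    falling in each bin. A common rule of thumb treats PSI > 0.2 as
    significant drift, but thresholds are team-specific.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            idx = sum(x > e for e in edges)  # index of the bin x falls in
            counts[idx] += 1
        n = len(sample)
        # Small floor avoids log(0) for empty bins.
        return [max(c / n, 1e-6) for c in counts]

    exp_f, act_f = fractions(expected), fractions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(exp_f, act_f))

train_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
live_same    = [0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.8]
live_shifted = [0.7, 0.75, 0.8, 0.85, 0.8, 0.9, 0.85, 0.8]

print(psi(train_scores, live_same))     # small: distributions similar
print(psi(train_scores, live_shifted))  # large: scores drifted upward
```

In production this kind of check would typically run on a schedule against logged predictions (e.g. via a SageMaker monitoring job) and raise an alert when the index crosses the agreed threshold.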
3+ years of experience developing backends using Node.js. Should be well versed in its asynchronous nature, event loop, promises, and callbacks.
Proficient in Node.js and Express, with working knowledge of JavaScript, jQuery, AJAX, HTML, CSS, and frameworks such as React.js.
Experience with Node.js MVC and REST API frameworks such as Express.
Experience developing desktop applications.
Experience with MongoDB.
Familiarity with AWS components.
Experience using Postman for documenting and testing APIs.
Job brief:
Your primary role is to contribute to generating sales for our Client. In addition, you will be responsible for closing sales deals over the phone, maintaining good customer relationships, and maximising profitability.
Goals:
The main goal is to help the company grow by bringing in customers, building trust with customers, and creating a strong brand reputation.
Responsibilities
• Manage all inquiries and provide customer support on each platform; Phone & Email
• Develop pitching strategies for different client bases to continually improve your closing ratio
• Maintain records of calls and sales/clients and note helpful information in CRM
• Educate customers on how solar can benefit them financially and its environmental impact
• Prepare and send quotations, daily follow-ups and Close Sales
• Handle grievances to preserve the company’s reputation: resolve customer complaints by investigating problems and developing solutions
• Go the “extra mile” to meet and exceed sales targets and KPIs, and facilitate future sales
• Monitor the company’s industry competitors, new products, and market conditions
• Set your own targets, goals, and sales forecasts at the start of each week, month, and year, and share them with your team leader
• Actively seek out referrals from existing customers and new inquiries/bookings
• Take a daily dose of self-training to update your knowledge base and skill set
Requirements
• Proven experience as a telesales representative (inbound/outbound)
• Proven track record of successfully meeting and exceeding sales targets
• Ability to learn about products and services and describe/explain them to prospects
• Excellent knowledge of English
• Excellent communication and interpersonal skills
• Cool-tempered and able to handle rejection
• Outstanding negotiation skills with the ability to resolve issues and address complaints
Pre-Requisites:
• Be able to work Australian Shift - 4:00 am - 1:00 pm
• Added Advantage if you have experience in Solar Industry
• Minimum 1–5 years of experience (preferably from MNCs such as Genpact, HCL, TCS, or Infosys)
In-house training will be provided.
Job Type: Full-time
Proven graphic design experience:
• A strong portfolio of illustrations or other graphics
• Familiarity with design software and technologies (such as Adobe XD, Illustrator, Dreamweaver, Photoshop)
• A keen eye for aesthetics and details
• Excellent communication skills
• Ability to work methodically and meet deadlines
• Degree in Design, Fine Arts or related field is a plus
-Build pixel-perfect, buttery smooth UIs across both mobile platforms
-Familiarity with newer specifications of ECMAScript
-Thorough understanding of databases, APIs, GraphQL, caching layers, proxies, and other web services
-Experience with popular React workflows (such as Redux, Mobx, and Flux)
-Leverage native APIs for deep integrations with both platforms
-Diagnose and fix bugs and performance bottlenecks for performance that feels native
-Release applications to the Apple and Google Play stores
-Familiar with implementing crashlytics or any other bug tracking library
-Familiarity with modern front-end build pipelines and tools (like Jenkins, GitHub, etc.)
-Experience with common front-end development tools such as Babel, Webpack, NPM, etc
-Familiarity with code versioning tools such as Git, SVN, and Bitbucket
-Professional, precise communication skills
