

Technogen India PvtLtd
https://technogeninc.com/About

The recruiter has not been active on this job recently. You may apply but please expect a delayed response.
What your days and weeks will include.
- Agile scrum meetings, as appropriate, to track and drive individual and team accountability.
- Ad hoc collaboration sessions to define and adopt standards and best practices.
- A weekly half-hour team huddle to discuss initiatives across the entire team.
- A weekly half-hour team training session focused on comprehension of our team delivery model.
- Conduct technical troubleshooting, maintenance, and operational support for production code
- Code! Test! Code! Test!
The skills that make you a great fit.
- 5 to 10 years of experience with Eagle Investment Accounting (a BNY Mellon product) as a domain expert/QA tester.
- Expert-level knowledge of capital markets or investment financial services.
- Must be self-aware and resilient, possess strong communication skills, and be able to lead.
- Strong experience with release testing in an agile model.
- Strong experience preparing test plans, test cases, and test summary reports.
- Strong experience in execution of smoke and regression test cases
- Strong experience utilizing and managing large and complex data sets.
- Experience with SQL Server and Oracle desired.
- Knowledge of the DevOps delivery model, including its roles and technologies, desired.
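The smoke and regression work called out above often reduces to reconciling large data sets between a source system and the system under test. As a hedged sketch only (table and column names are invented for illustration, and sqlite3 stands in for SQL Server/Oracle), a minimal reconciliation check might look like:

```python
import sqlite3

# Hypothetical smoke check: reconcile row counts and a control total
# between a source table and the table loaded into the test environment.
def smoke_check(conn, source_table, target_table, amount_col):
    cur = conn.cursor()
    src_count = cur.execute(f"SELECT COUNT(*) FROM {source_table}").fetchone()[0]
    tgt_count = cur.execute(f"SELECT COUNT(*) FROM {target_table}").fetchone()[0]
    src_total = cur.execute(f"SELECT SUM({amount_col}) FROM {source_table}").fetchone()[0]
    tgt_total = cur.execute(f"SELECT SUM({amount_col}) FROM {target_table}").fetchone()[0]
    return {
        "counts_match": src_count == tgt_count,
        "totals_match": src_total == tgt_total,
    }

# Invented positions tables for demonstration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE positions_src (id INTEGER, market_value REAL);
    CREATE TABLE positions_tgt (id INTEGER, market_value REAL);
    INSERT INTO positions_src VALUES (1, 100.0), (2, 250.5);
    INSERT INTO positions_tgt VALUES (1, 100.0), (2, 250.5);
""")
result = smoke_check(conn, "positions_src", "positions_tgt", "market_value")
print(result)  # both checks pass for identical tables
```

In practice such checks would run against the real databases and cover many more controls (key uniqueness, null rates, tolerance bands on totals), but the reconcile-and-compare shape stays the same.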

- Analyze client requirements and design customized Saviynt IGA solutions to meet their identity management needs
- Configure and implement Saviynt IGA platform features, including role-based access control, user provisioning, de-provisioning, certification campaigns, and entitlement management
- Collaborate with clients and internal stakeholders to understand business processes and translate them into effective IGA policies
- Assist in the integration of Saviynt IGA with existing identity and access management (IAM) systems, directories, and applications
- Conduct testing and troubleshooting to ensure the accuracy and effectiveness of the implemented IGA solution

Daily and monthly responsibilities
- Review and coordinate with business application teams on data delivery requirements.
- Develop estimation and proposed delivery schedules in coordination with development team.
- Develop sourcing and data delivery designs.
- Review data model, metadata and delivery criteria for solution.
- Review and coordinate with team on test criteria and performance of testing.
- Contribute to the design, development and completion of project deliverables.
- Complete in-depth data analysis and contribute to strategic efforts.
- Develop a complete understanding of how we manage data, with a focus on improving how data is sourced and managed across multiple business areas.
Basic Qualifications
- Bachelor’s degree.
- 5+ years of data analysis working with business data initiatives.
- Knowledge of Structured Query Language (SQL) and use in data access and analysis.
- Proficient in data management including data analytical capability.
- Excellent verbal and written communication skills and high attention to detail.
- Experience with Python.
- Presentation skills in demonstrating system design and data analysis solutions.
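The SQL and Python qualifications above describe access-and-aggregate analysis work. As an illustrative sketch only (the orders table and its columns are invented, and Python's built-in sqlite3 driver stands in for a production database), a minimal pass might be:

```python
import sqlite3

# Hypothetical analysis: aggregate a small "orders" table with SQL,
# then reshape the result in Python for reporting.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('East', 120.0), ('East', 80.0), ('West', 200.0);
""")
rows = conn.execute(
    "SELECT region, COUNT(*) AS n, SUM(amount) AS total "
    "FROM orders GROUP BY region ORDER BY region"
).fetchall()

# Turn the result set into a dict keyed by region for downstream use.
summary = {region: {"n": n, "total": total} for region, n, total in rows}
print(summary)
# {'East': {'n': 2, 'total': 200.0}, 'West': {'n': 1, 'total': 200.0}}
```

The same pattern (push aggregation into SQL, post-process in Python) scales to the larger business data initiatives the role describes.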


.NET Core with SQL is preferred, or a very strong .NET developer with SQL.
The job requires the resource to handle a mix of activities: documentation, SQL support, .NET development, integration, and interaction with end users.


We are looking for a Full Stack Developer (remote position).
• Strong knowledge of HTML, CSS, JavaScript, and JavaScript frameworks (e.g., React, Angular, Vue)
• Experience with backend technologies such as Node.js, DynamoDB, and PostgreSQL
• Experience with version control systems (e.g., Git)
• Knowledge of web standards and accessibility guidelines
• Familiarity with server-side rendering and SEO best practices
• Experience with AWS services, including AWS Lambda, API Gateway, and DynamoDB
• Experience with DevOps: Infrastructure as Code, CI/CD, and test & deployment automation
• Experience writing and maintaining a test suite throughout a project's lifecycle
• Familiarity with Web Accessibility standards and technology
• Experience architecting and building GraphQL APIs and RESTful services

- Data Steward:
The Data Steward will collaborate and work closely with the group's software engineering and business divisions. The Data Steward has overall accountability for the group's/division's data and reporting posture, responsibly managing data assets, data lineage, and data access in support of sound data analysis. This role focuses on data strategy, execution, and support for projects, programs, application enhancements, and production data fixes. The Data Steward makes well-thought-out decisions on complex or ambiguous data issues, establishes the data stewardship and information management strategy and direction for the group, and communicates effectively with individuals at various levels of the technical and business communities. This individual will become part of the corporate Data Quality and Data Management/entity resolution team supporting various systems across the board.
Primary Responsibilities:
- Responsible for data quality and data accuracy across all group/division delivery initiatives.
- Responsible for data analysis, data profiling, data modeling, and data mapping capabilities.
- Responsible for reviewing and governing data queries and DML.
- Accountable for the assessment, delivery, quality, accuracy, and tracking of any production data fixes.
- Accountable for the performance, quality, and alignment to requirements for all data query design and development.
- Responsible for defining standards and best practices for data analysis, modeling, and queries.
- Responsible for understanding end-to-end data flows and identifying data dependencies in support of delivery, release, and change management.
- Responsible for the development and maintenance of an enterprise data dictionary aligned to data assets and the business glossary, and for the definition and maintenance of the group's data landscape, including overlays with the technology landscape, end-to-end data flows/transformations, and data lineage.
- Responsible for rationalizing the group's reporting posture through the definition and maintenance of a reporting strategy and roadmap.
- Partners with the data governance team to ensure data solutions adhere to the organization’s data principles and guidelines.
- Owns group's data assets including reports, data warehouse, etc.
- Understand customer business use cases and be able to translate them into technical specifications and a vision for how to implement a solution.
- Accountable for defining the performance tuning needs for all group data assets and managing the implementation of those requirements within the context of group initiatives as well as steady-state production.
- Partners with others in test data management and masking strategies and the creation of a reusable test data repository.
- Responsible for solving data-related issues and communicating resolutions with other solution domains.
- Actively and consistently support all efforts to simplify and enhance the Clinical Trial Prediction use cases.
- Apply knowledge in analytic and statistical algorithms to help customers explore methods to improve their business.
- Contribute toward analytical research projects through all stages including concept formulation, determination of appropriate statistical methodology, data manipulation, research evaluation, and final research report.
- Visualize and report data findings creatively in a variety of visual formats that appropriately provide insight to the stakeholders.
- Achieve defined project goals within customer deadlines; proactively communicate status and escalate issues as needed.
Additional Responsibilities:
- Strong understanding of the Software Development Life Cycle (SDLC) with Agile Methodologies
- Knowledge and understanding of industry-standard/best practices requirements gathering methodologies.
- Knowledge and understanding of Information Technology systems and software development.
- Experience with data modeling and test data management tools.
- Experience in data integration projects.
- Good problem-solving & decision-making skills.
- Good communication skills within the team, site, and with the customer
Knowledge, Skills and Abilities
- Technical expertise in data architecture principles and design aspects of various DBMS and reporting concepts.
- Solid understanding of key DBMS platforms like SQL Server, Azure SQL
- Results-oriented, diligent, and works with a sense of urgency. Assertive, responsible for his/her own work (self-directed), have a strong affinity for defining work in deliverables, and be willing to commit to deadlines.
- Experience in MDM tools like MS DQ, SAS DM Studio, Tamr, Profisee, Reltio etc.
- Experience in Report and Dashboard development
- Statistical and Machine Learning models
- Python (sklearn, numpy, pandas, gensim)
- Nice to Have:
- 1 year of ETL experience
- Natural Language Processing
- Neural networks and Deep Learning
- Experience with the Keras, TensorFlow, spaCy, NLTK, and LightGBM Python libraries
Interaction: Frequently interacts with subordinate supervisors.
Education: Bachelor's degree, preferably in Computer Science, B.E., or another quantitative field related to the area of assignment. Professional certification related to the area of assignment may be required.
Experience: 7 years of pharmaceutical/biotech/life sciences experience; 5 years of clinical trials experience and knowledge; excellent documentation, communication, and presentation skills, including PowerPoint.
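The data profiling and data quality responsibilities above are the kind of column-level checks a steward runs before approving a production data fix. A minimal, hypothetical sketch in plain Python (the patient records and column names are invented; real work would typically use pandas or a dedicated profiling tool):

```python
# Hypothetical profiling pass: count nulls and distinct values per
# column across a set of records, the core of a data-quality summary.
def profile(records, columns):
    report = {}
    for col in columns:
        values = [r.get(col) for r in records]
        non_null = [v for v in values if v is not None]
        report[col] = {
            "nulls": len(values) - len(non_null),
            "distinct": len(set(non_null)),
        }
    return report

# Invented clinical-style records for demonstration.
patients = [
    {"id": 1, "site": "A"},
    {"id": 2, "site": None},
    {"id": 3, "site": "A"},
]
report = profile(patients, ["id", "site"])
print(report)
# {'id': {'nulls': 0, 'distinct': 3}, 'site': {'nulls': 1, 'distinct': 1}}
```

A report like this feeds directly into the data dictionary and data-fix assessment work the role describes: columns with unexpected null rates or cardinality get flagged before any DML is approved.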


- Sr. Data Engineer:
Core Skills – Data Engineering, Big Data, Pyspark, Spark SQL and Python
Candidate with prior Palantir Cloud Foundry OR Clinical Trial Data Model background is preferred
Major accountabilities:
- Responsible for data engineering, Foundry data pipeline creation, Foundry analysis & reporting, Slate application development, reusable code development & management, and integrating internal or external systems with Foundry for high-quality data ingestion.
- Have a good understanding of the Foundry platform landscape and its capabilities.
- Performs data analysis required to troubleshoot data related issues and assist in the resolution of data issues.
- Defines company data assets (data models), Pyspark, spark SQL, jobs to populate data models.
- Designs data integrations and data quality framework.
- Design & implement integration with internal and external systems and the F1 AWS platform using Foundry Data Connector or Magritte agents.
- Collaborate with data scientists, data analysts, and technology teams to document and leverage their understanding of Foundry's integration with different data sources.
- Actively participate in agile work practices.
- Coordinate with Quality Engineers to ensure that all quality controls, naming conventions, and best practices have been followed.
Desired Candidate Profile :
- Strong data engineering background
- Experience with Clinical Data Model is preferred
- Experience in:
- SQL Server, Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing
- Java and Groovy for our back-end applications and data integration tools
- Python for data processing and analysis
- Cloud infrastructure based on AWS EC2 and S3
- 7+ years IT experience, 2+ years’ experience in Palantir Foundry Platform, 4+ years’ experience in Big Data platform
- 5+ years of Python and Pyspark development experience
- Strong troubleshooting and problem solving skills
- BTech or master's degree in computer science or a related technical field
- Experience designing, building, and maintaining big data pipeline systems
- Hands-on experience on Palantir Foundry Platform and Foundry custom Apps development
- Able to design and implement data integration between Palantir Foundry and external Apps based on Foundry data connector framework
- Hands-on in programming languages, primarily Python, R, Java, and Unix shell scripts
- Hands-on experience with the AWS / Azure cloud platforms and stacks
- Strong in API based architecture and concept, able to do quick PoC using API integration and development
- Knowledge of machine learning and AI
- Skill and comfort working in a rapidly changing environment with dynamic objectives and iteration with users.
Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision


